@aveline Regarding the proxy_cache_lock question, I looked into it again today. I finally found an official answer from Maxim Dounin of nginx.org:
https://www.ruby-forum.com/topic/5010940

Hello!
On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote:
> being seemingly why proxy_cache_lock was introduced, as you initially suggested.
Again: responses are not guaranteed to be the same, and unless
you are using cache (and hence proxy_cache_key and various header
checks to ensure responses are at least interchangeable), the only
thing you can do is to proxy requests one by one.
If you are using cache, then there is proxy_cache_key to identify
a resource requested, and proxy_cache_lock to prevent multiple
parallel requests to populate the same cache node (and
"proxy_cache_use_stale updating" to prevent multiple requests when
updating a cache node).
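For reference, here is a minimal sketch of how the directives Maxim mentions fit together. The directive names are real nginx directives; the cache zone name, paths, upstream, and timeout values are placeholder assumptions:

```nginx
# Shared cache zone; path and sizing are illustrative.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=my_cache:10m
                 max_size=1g inactive=60m;

server {
    listen 80;

    location / {
        proxy_pass http://backend;            # hypothetical upstream

        proxy_cache my_cache;
        # Identify the requested resource in the cache.
        proxy_cache_key $scheme$proxy_host$request_uri;

        # Only one request at a time populates a given cache node;
        # the others wait, up to proxy_cache_lock_timeout.
        proxy_cache_lock on;
        proxy_cache_lock_timeout 5s;

        # While a cached entry is being refreshed, serve the stale
        # copy to concurrent clients instead of re-fetching upstream.
        proxy_cache_use_stale updating;
    }
}
```

Note that with proxy_cache_lock, waiting clients still only get a response after the full upstream response is cached, which is exactly the large-file cost Maxim discusses below.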
In theory, cache code can be improved (compared to what we
currently have) to introduce sending of a response being loaded
into a cache to multiple clients. I.e., stop waiting for a cache
lock once we've got the response headers, and stream the response
body being loaded to all clients waiting for it. This should/can
help when loading large files into a cache, when waiting with
proxy_cache_lock for a complete response isn't cheap. In
practice, introducing such code isn't cheap either, and it's not
about using other names for temporary files.
--
Maxim Dounin
http://nginx.org/

I'd really like to put up a bounty for someone to implement this feature.