V2EX › NGINX
On the behavior of NGINX's proxy_cache_lock

Livid (PRO) · 2014-07-05 14:26:49 +08:00 · 10869 views
This topic was created 4190 days ago; the information in it may have changed since then.
    http://nginx.org/en/docs/http/ngx_http_proxy_module.html#proxy_cache_lock

Does anyone know how this directive behaves once it's enabled?

    proxy_cache_lock on;

According to the documentation, it seems that if a large file has to be fetched from the origin, then with this enabled only one request for it will be sent to the origin at a time, and all other requests for the same file will wait for that request to finish.

But then, if the file is very large and takes several minutes to download, do the other requests just sit there waiting for those minutes?
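For context, the directive sits inside an ordinary proxy-cache setup; a minimal sketch (the cache path, zone name, and upstream host are made up for illustration):

```nginx
# Illustrative cache zone; path, name, and sizes are hypothetical.
proxy_cache_path /var/cache/nginx levels=1:2 keys_zone=bigfiles:10m
                 max_size=10g inactive=60m;

server {
    listen 80;

    location / {
        proxy_pass http://origin.example.com;

        proxy_cache     bigfiles;
        # Requests with the same key map to the same cache element.
        proxy_cache_key $scheme$proxy_host$request_uri;

        # Allow only one request at a time to populate a given
        # cache element; other requests for the same key wait.
        proxy_cache_lock on;
    }
}
```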
9 replies · last reply at 2014-10-17 20:51:05 +08:00
aveline · #1 · 2014-07-05 14:41:19 +08:00

It's blocking ... which makes this feature pretty useless.

Are you trying to get something like Squid's Collapsed Forwarding?
Livid (MOD, OP, PRO) · #2 · 2014-07-05 14:44:41 +08:00

@aveline I'm still testing to find out exactly how it behaves.

Here's a blog post saying it's similar to Squid's:

https://blog.feuvan.net/2013/07/12/10135-nginx-proxy-cache-lock.html
lsylsy2 · #3 · 2014-07-05 14:46:59 +08:00

I've always felt Nginx's caching isn't very good. I only use it for load balancing and leave caching to Varnish and Squid.
alex321 · #4 · 2014-07-05 15:00:20 +08:00

I copied a pile of configs from the web to build an sslstart-based Google reverse proxy for friends, with a 2 GB cache. Maybe because traffic is low, I haven't run into any problems so far, and I don't really know how much value the cache adds.
aveline · #5 · 2014-07-05 15:07:01 +08:00

Tested again locally: with proxy_cache_lock on, another request with the same cache key will not be sent upstream until proxy_cache_lock_timeout is reached.
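That waiting is bounded; a sketch of the relevant pair of directives (5s is the documented default for the timeout; zone and upstream names are illustrative):

```nginx
location / {
    proxy_pass  http://origin.example.com;   # hypothetical upstream
    proxy_cache bigfiles;                    # hypothetical zone

    # Only one request populates a given cache element at a time.
    proxy_cache_lock on;

    # Other requests for the same key wait at most this long; once
    # the timeout expires they are passed to the upstream, and their
    # responses are not cached.
    proxy_cache_lock_timeout 5s;
}
```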
Livid (MOD, OP, PRO) · #6 · 2014-07-05 15:13:03 +08:00 · 1
aveline · #7 · 2014-07-05 15:22:10 +08:00

@Livid Nice. Took a look inside; it's implemented with redis subscribe, very clever ...
Livid (MOD, OP, PRO) · #8 · 2014-10-17 20:49:15 +08:00

@aveline I dug into the proxy_cache_lock question again today, and finally found an answer from Maxim Dounin of nginx.org:

https://www.ruby-forum.com/topic/5010940

    Hello!

    On Mon, Jun 30, 2014 at 11:10:52PM -0400, Paul Schlie wrote:

    > being seemingly why proxy_cache_lock was introduced, as you initially suggested.
    Again: responses are not guaranteed to be the same, and unless
    you are using cache (and hence proxy_cache_key and various header
    checks to ensure responses are at least interchangeable), the only
    thing you can do is to proxy requests one by one.

    If you are using cache, then there is proxy_cache_key to identify
    a resource requested, and proxy_cache_lock to prevent multiple
    parallel requests to populate the same cache node (and
    "proxy_cache_use_stale updating" to prevent multiple requests when
    updating a cache node).

In theory, cache code can be improved (compared to what we
currently have) to introduce sending of a response being loaded
into a cache to multiple clients. I.e., stop waiting for a cache
lock once we've got the response headers, and stream the response
body being loaded to all clients waiting for it. This should/can
help when loading large files into a cache, when waiting with
proxy_cache_lock for a complete response isn't cheap. In
practice, introducing such code isn't cheap either, and it's not
about using other names for temporary files.

    --
    Maxim Dounin
    http://nginx.org/

I'd really love to put up a bounty for someone to build this feature.
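The existing mitigations Maxim mentions can be combined like this (a sketch; the zone and upstream names are illustrative):

```nginx
location / {
    proxy_pass  http://origin.example.com;   # hypothetical upstream
    proxy_cache bigfiles;                    # hypothetical zone

    # Identify the resource, so interchangeable responses share one entry.
    proxy_cache_key $scheme$proxy_host$request_uri;

    # One request per cache element while it is being populated ...
    proxy_cache_lock on;

    # ... and while an expired element is being refreshed, serve the
    # stale copy to everyone else instead of making them wait.
    proxy_cache_use_stale updating;
}
```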
Livid (MOD, OP, PRO) · #9 · 2014-10-17 20:51:05 +08:00

Yeah, I'll have to ask the company for resources to get this feature built.