From r at roze.lv  Sat Oct  1 01:49:38 2011
From: r at roze.lv (Reinis Rozitis)
Date: Sat, 1 Oct 2011 01:49:38 +0300
Subject: Question regarding Nginx Configuration
In-Reply-To:
References:
Message-ID:
> Is it possible for location blocks to exist within the main http block at
> a higher level than the server block?
No, but if you have a lot of repeating configuration/locations in the
server blocks, you can move those to a separate file and then use
include ( http://wiki.nginx.org/CoreModule#include ) instead.
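As an illustration (file name and paths are made up), the shared part
could live in its own file:

    # /etc/nginx/common-locations.conf
    location /static/ {
        root /var/www/shared;
        expires 30d;
    }

and each server block then just pulls it in:

    server {
        listen 80;
        server_name example.com;
        include /etc/nginx/common-locations.conf;
    }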
From igor at sysoev.ru  Sat Oct  1 06:07:02 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Sat, 1 Oct 2011 06:07:02 +0400
Subject: nginx-1.0.8
Message-ID:
Changes with nginx 1.0.8
01 Oct 2011
*) Bugfix: nginx could not be built --with-http_mp4_module and without
--with-debug option.
Igor Sysoev
From liseen.wan  Sat Oct  1 08:34:37 2011
From: liseen.wan
Date: Sat, 1 Oct 2011 08:34:37 +0800
Subject: HttpHealthcheckModule server not marked down
In-Reply-To:
References:
Message-ID:
On Sat, Oct 1, 2011 at 7:16 AM, liseen wrote:
> Please try:
> /liseen/healthcheck_nginx_upstreams/blob/master/healthcheck.patch
> patch -p1 < healthcheck.patch
> ./configure ....
> if you use healthcheck with upstream hash, please compile with branch
> support_http_healthchecks of cep21's fork:
> /cep21/nginx_upstream_hash/tree/support_http_healthchecks
If all of an upstream's backends are down (per healthcheck), cep's
upstream_hash will ignore the healthcheck. If that is not what you
need, please try:
/liseen/nginx_upstream_hash
If you find something wrong, please open an issue on github. Thanks.
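In full, the build sequence is roughly the following (directory names
are illustrative; the configure flags are the ones used elsewhere in
this thread):

    cd nginx-1.0.6
    patch -p1 < ../healthcheck_nginx_upstreams/healthcheck.patch
    ./configure --add-module=../healthcheck_nginx_upstreams \
        --with-http_ssl_module --with-http_stub_status_module
    make
    make install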
> On Sat, Oct 1, 2011 at 6:06 AM, liseen wrote:
>> It is a bug. The ngx_upstream_get_peer only checks the ... and forgot to
>> check i itself.
>> I used my nginx patch for healthcheck; I have used it in production more
>> than half a year. I will upload it to my github in some hours.
>> On Fri, Sep 23, 2011 at 4:34 AM, Sjaak Pieterse wrote:
>>> Hi there,
>>> i'm trying to use the HttpHealthcheckModule for nginx, but i have some
>>> troubles with it.
>>> i have two servers in my upstream, when sabotaging the health for one
>>> server i see in the status view of healthcheck that the server is
>>> down(1), but if i go to the website i'm checking i still come out on
>>> it and see a broken page.
>>> how can i arrange that the server automatically is marked as down when
>>> the check fails?
>>> sorry for my bad english and maybe noob questions.
>>> config:
>>> upstream www-health {
>>>     server x.x.x.1 ;
>>>     server x.x.x.2 ;
>>>     healthcheck_enabled;
>>>     healthcheck_delay 10000 ;
>>>     healthcheck_timeout 1000;
>>>     healthcheck_failcount 2;
>>>     #healthcheck_expected 'I_AM_ALIVE';
>>>     #Important: HTTP/1.0
>>>     healthcheck_send "GET / HTTP/1.0" 'Host: health.'
>>>         'Connection: close' ;
>>> }
>>> nginx: nginx version: nginx/1.0.6
>>> nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
>>> nginx: TLS SNI support enabled
>>> nginx: configure arguments: --with-http_ssl_module
>>> --add-module=/gnosek-nginx-upstream-fair-2131c73
>>> --with-http_stub_status_module
>>> --add-module=/cep21-healthcheck_nginx_upstreams-b33a846
>>> --prefix=/usr/local/nginx-1.0.6 --with-debug
>>> peckhardt at test-nginx:~/nginx-1.0.6$ patch -p1 <
>>> /cep21-healthcheck_nginx_upstreams-5fa4bff/nginx.patch
>>> hope someone would help me.
>>> greetings
>>> _______________________________________________
>>> nginx mailing list
>>> nginx at nginx.org
>>> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From nginx-forum at nginx.us  Sat Oct  1 11:50:06 2011
From: nginx-forum at nginx.us (siloan)
Date: Sat, 01 Oct 2011 11:50:06 -0400
Subject: GET issue
Message-ID:
Hi, I have an application that sends some data to my web page through
sockets, e.g. by hitting link.php?name=john&age=32. With Apache
everything is OK: the values are grabbed and inserted in the DB. But
with nginx (all versions since 1.0.4) there is no action, although I
see in the logs that my app is hitting that link. Any ideas, please?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,153#msg-216153
From nginx-forum at nginx.us  Sat Oct  1 12:07:03 2011
From: nginx-forum at nginx.us (dougconran)
Date: Sat, 01 Oct 2011 12:07:03 -0400
Subject: Question regarding Nginx Configuration
In-Reply-To:
References:
Message-ID:
Yes, I thought of that and it would certainly reduce the amount of
duplication. However, I do think that there are times when location
directives apply to the whole server (the real server, that is, not the
virtual host) and it would be nice to be able to use them at the higher
level. Maybe that is one for the wishlist?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,154#msg-216154
From admin at hostmsr.net  Sun Oct  2 07:51:10 2011
From: admin at hostmsr.net (Hostmsr.net)
Date: Sun, 2 Oct 2011 07:51:10 +0200
Subject: Internal Server Error
Message-ID:
I have had a big problem for seven days.
All my websites are down and this message appeared:
Internal Server Error
The server encountered an internal error or misconfiguration and was unable to complete your request.
Please contact the server administrator, and inform them of the time the error occurred, and anything you might have done that may have caused the error.
More information about this error may be available in the server error log.
Additionally, a 404 Not Found error was encountered while trying to use an ErrorDocument to handle the request.
I sent this to cPanel support and they said:
I went to the URL provided, and see an error related to nginx. Please note that we are unable to provide support for mod_rpaf or nginx, as they are 3rd party products. Once nginx is disabled, if you are still experiencing issues with your sites, please let me know so I can look into this further.
When I open the logs with
grep "Oct 01" /etc/httpd/logs/error_log |grep -v "File does not exist" |tail -50
these errors appeared:
[Sat Oct 01 18:24:01 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:01 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:02 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:03 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:03 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:03 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:03 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:03 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:04 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
[Sat Oct 01 18:24:04 2011] [notice] cannot use a full URL in a 401 ErrorDocument directive --- ignoring!
I tried to restart nginx but this error appeared:
[alert]: could not open error log file: open()
"/var/log/nginx/error.log" failed (30: Read-only file system)
the configuration file /etc/nginx/nginx.conf syntax is ok
09:26:31 [emerg] 20667#0: open() "/var/run/nginx.pid" failed (30: Read-only file system)
configuration file /etc/nginx/nginx.conf test failed
Can anyone help me with my problem?
I am waiting for you.
Thanks a lot.
-------------- next part --------------
An HTML attachment was scrubbed...
From nginx-forum at nginx.us  Sun Oct  2 12:34:20 2011
From: nginx-forum at nginx.us (tcbarrett)
Date: Sun, 02 Oct 2011 12:34:20 -0400
Subject: Best way to burst static file cache
In-Reply-To:
References:
Message-ID:
kaspars Wrote:
-------------------------------------------------------
> Well, if you set cache to expire 30 years from now
> the browser should honor that, that is if no
> Last-Modified is set.
I have it set to 30m (30 minutes, right?). If I update, for example,
some css files and then do a hard refresh on a) a Mac, then I get the
new css, or b) Windows 7, then I keep getting the cached css files.
Is there any technique to burst this caching? It might not be a 'pure
nginx' solution, but anything that could be supported by an nginx config
setting?
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,179#msg-216179
From igor at sysoev.ru  Sun Oct  2 13:15:16 2011
From: igor at sysoev.ru (Igor Sysoev)
Date: Sun, 2 Oct 2011 13:15:16 +0400
Subject: Question regarding Nginx Configuration
In-Reply-To:
References:
Message-ID:
On Sat, Oct 01, 2011 at 08:07:03AM -0400, dougconran wrote:
> Yes, I thought of that and it would certainly reduce the amount of
> duplication. However, I do think that there are times when location
> directives apply to the whole server (the real server, that is, not the
> virtual host) and it would be nice to be able to use them at the higher
> level. Maybe that is one for the wishlist?
Locations, Directories, and other blocks in the global server
configuration are one of the features I never liked in Apache, so
this is the reason why they were not implemented in nginx.
Igor Sysoev
From lists  Sun Oct  2 13:15:46 2011
From: lists
Date: Sun, 02 Oct 2011 13:15:46 +0100
Subject: Best way to burst static file cache
In-Reply-To:
References:
Message-ID:
On 02/10/2011, tcbarrett wrote:
> kaspars Wrote:
> -------------------------------------------------------
>> Well, if you set cache to expire 30 years from now
>> the browser should honor that, that is if no
>> Last-Modified is set.
> I have it set to 30m (30 minutes, right?). If I update, for example,
> some css files and then do a hard refresh on a) a Mac, then I get the
> new css, or b) Windows 7, then I keep getting the cached css files.
> Is there any technique to burst this caching? It might not be a 'pure
> nginx' solution, but anything that could be supported by an nginx config
> setting?
What everyone else does is set the URLs to be unique and then change the
URL when the asset is updated. E.g. a simple example, which Rails uses
and which arguably isn't perfect, would be:
/assets/blah.jpg?
The random string after the ? could be generated in various ways, e.g.
an incrementing counter, or it could be the epoch time of the file mtime
(i.e. age in seconds).
Now you can set the expire time to 30 years, and when the asset is
updated you simply arrange for the URL to be updated in the HTML and a
"new" image is pulled down.
Implementation left to the reader as an exercise...
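For what it's worth, the nginx side of such a scheme is only the
far-future expiry; the URL rewriting happens wherever the HTML is
generated. A minimal sketch (the location pattern is illustrative):

    location /assets/ {
        # safe to cache "forever", because any change to a file is
        # accompanied by a change of its query string in the HTML
        expires max;
    }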
From kworthington  Sun Oct  2 15:11:33 2011
From: kworthington (Kevin Worthington)
Date: Sun, 2 Oct 2011 15:11:33 -0400
Subject: nginx-1.0.8
In-Reply-To:
References:
Message-ID:
Hello Nginx Users,
I just released Nginx 1.0.8 For Windows http://goo.gl/tq6e3 (32-bit and
64-bit). These versions are to support legacy users who are already using
Cygwin-based builds of Nginx. Official Windows binaries are at nginx.org.
Kevin Worthington
kworthington (at] gmail {dot) com
On Sat, Oct 1, 2011 at 2:07 AM, Igor Sysoev wrote:
> Changes with nginx 1.0.8
> *) Bugfix: nginx could not be built --with-http_mp4_module and without
>    --with-debug option.
> Igor Sysoev
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From nginx at nginxuser.net  Sun Oct  2 18:38:18 2011
From: nginx at nginxuser.net (Nginx User)
Date: Sun, 2 Oct 2011 18:38:18 +0300
Subject: NGINX On REDHAT LINUX 5
In-Reply-To:
References:
Message-ID:
On 29 September 2011, Sergey Budnevitch wrote:
> On 28.09.2011, at 20:25, RFGuerengomba wrote:
> > I installed NGINX on Ubuntu before and it was working fine, but recently
> > our company decided to move to RedHat Linux. I am having little
> difficulty.
> > In Ubuntu under the NGINX system folder I have the folders called
> > "sites-available" and "sites-enabled"
> > I seem not to have that in REDHAT Linux. Can you please help me to
> > understand better?
> I am answering to nginx at nginx.org instead of nginx-devel, because
> nginx-devel is the wrong mailing list for questions of this kind.
> In Red Hat Linux you could place virtual site configs in /etc/nginx/conf.d/,
> in files with a .conf extension.
> The Ubuntu nginx package has an "include /etc/nginx/sites-enabled/*;" directive
> in the default nginx.conf; the Red Hat one has "include /etc/nginx/conf.d/*.conf;".
> > Roger F. Guerengomba
> > Senior System Administrator
> > JLG Industires, Inc.
> > 1 JLG Drive
> > McConnellsburg, PA 17233
> > 717-485-6906
> > rfguerengomba
Follow this guide to build from source:
//centos-installing-nginx-from-source
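As a sketch of what Sergey describes above (all names illustrative), a
drop-in vhost on Red Hat would be a file such as
/etc/nginx/conf.d/example.conf:

    # picked up by the stock "include /etc/nginx/conf.d/*.conf;" line
    server {
        listen 80;
        server_name example.com;
        root /var/www/example;
    }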
-------------- next part --------------
An HTML attachment was scrubbed...
From gary.wilson  Sun Oct  2 22:07:23 2011
From: gary.wilson (Gary Wilson Jr.)
Date: Sun, 2 Oct 2011 22:07:23 -0500
Subject: retrying proxy connect timeouts but not read timeouts
Message-ID:
I've got a proxy + upstream setup with three defined upstream
servers. I would like nginx to retry requests when there is an error
or when there is a connection timeout (but not a read timeout).
However, it appears that proxy_next_upstream only has the granularity
of "error" and "timeout".
Is there any way to have proxy_next_upstream differentiate between
a connection timeout and a read timeout? If not, is creating a ticket in
the bug tracker the proper way to request such a feature?
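For reference, a minimal sketch of the granularity as it stands (the
upstream name is illustrative): the two timeouts are configured
separately, but proxy_next_upstream treats them as one condition.

    location / {
        proxy_pass http://backends;
        proxy_connect_timeout 2s;   # connection timeout
        proxy_read_timeout 60s;     # read timeout
        # "timeout" below matches either of the above
        proxy_next_upstream error timeout;
    }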
From gary.wilson  Sun Oct  2 22:11:59 2011
From: gary.wilson (Gary Wilson Jr.)
Date: Sun, 2 Oct 2011 22:11:59 -0500
Subject: number of upstream definitions, and a suggestion
Message-ID:
I currently have maybe a couple hundred upstream definitions but am
likely to grow to thousands. Are there any practical or theoretical
limits to the number of upstream definitions that can exist in one
nginx instance? Is anyone out there making use of this many upstream
definitions?
On a side note, I think it would be cool if upstream identifiers could
be combined with port numbers, so that a single upstream definition
that defined servers could be combined with multiple proxy_pass
definitions that only differ by port. For example, a configuration
like this:
upstream app1 {
    server server1:port1;
    server server2:port1;
    server server3:port1;
}
upstream app2 {
    server server1:port2;
    server server2:port2;
    server server3:port2;
}
location /app1 { proxy_pass http://app1; }
location /app2 { proxy_pass http://app2; }
Could be replaced by a setup like this:
upstream cluster1 {
    server server1;
    server server2;
    server server3;
}
location /app1 { proxy_pass http://cluster1:port1; }
location /app2 { proxy_pass http://cluster1:port2; }
The latter would require 1 upstream definition and N proxy_pass
definitions, instead of N upstream definitions (where the list of
servers doesn't change, only the port numbers) plus N proxy_pass
definitions.
From nginx-forum at nginx.us  Mon Oct  3 13:12:15 2011
From: nginx-forum at nginx.us (dougconran)
Date: Mon, 03 Oct 2011 13:12:15 -0400
Subject: Question regarding Nginx Configuration
In-Reply-To:
References:
Message-ID:
Igor Sysoev Wrote:
-------------------------------------------------------
> Locations, Directories, and other blocks in the global server
> configuration are one of the features I never liked in Apache, so
> this is the reason why they were not implemented in nginx.
> Igor Sysoev
I guess that is certainly enough reason not to include them in Nginx -
and thank you for making a very good alternative available, I certainly
couldn't do any better!
My question, originally, was about whether I had correctly understood
how to configure it, or whether it could be done more
efficiently and I had not understood how to do that.
However, my config works and it was a lot easier than Apache - maybe I
should just count my blessings.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,210#msg-216210
From nginx-forum at nginx.us  Mon Oct  3 14:01:53 2011
From: nginx-forum at nginx.us (firestorm)
Date: Mon, 03 Oct 2011 14:01:53 -0400
Subject: Problem with GZIP
In-Reply-To:
References:
Message-ID:
Using Firefox's plug-in Firebug I can check gzip support; it's not
enabled. There are 8 plain text components that should be sent compressed:
http://10.128.50.101/css/principal.css
http://10.128.50.101/css/blueprint/screen.css
http://10.128.50.101/css/learning/learningp.css
http://10.128.50.101/css/learning/learningm.css
http://10.128.50.101/css/blueprint/plugins/dashboard/buttons.css
http://10.128.50.101/css/learning/lear_dashboard.css
http://10.128.50.101/sfJqueryReloadedPlugin/css/plugins/jquery.fancybox.css
http://10.128.50.101/js/js_c7b7caad6e7bad82bfa9d21a.js
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,213#msg-216213
From nginx-forum at nginx.us  Mon Oct  3 14:04:16 2011
From: nginx-forum at nginx.us (firestorm)
Date: Mon, 03 Oct 2011 14:04:16 -0400
Subject: Problem with GZIP
In-Reply-To:
References:
Message-ID:
07:44:01 [debug] 14365#0: *1 write new buf t:1 f:0 ,
pos , size: 366 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 http write filter: l:0 f:0
07:44:01 [debug] 14365#0: *1 http cacheable: 0
07:44:01 [debug] 14365#0: *1 http upstream process upstream
07:44:01 [debug] 14365#0: *1 pipe read upstream: 1
07:44:01 [debug] 14365#0: *1 pipe preread: 3873
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 01
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 06
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 01
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 99
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record length:
07:44:01 [debug] 14365#0: *1 input buf #0 086E16B0
07:44:01 [debug] 14365#0: *1 input buf 086E16B0 3864
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #1
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #2
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #3
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #4
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #5
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #6
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #7
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 readv: 1:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 4096
07:44:01 [debug] 14365#0: *1 input buf #8
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 pipe downstream ready
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
086E15C8, pos 086E16B0, size: 3864 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf in
s:1 t:1 f:0
, pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe length: -1
07:44:01 [debug] 14365#0: *1 pipe write downstream: 1
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1 086E16B0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write: out:, f:1
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 malloc: B76F
07:44:01 [debug] 14365#0: *1 gzip alloc: n:1 s:5824 a:8192
p:B76F1008
07:44:01 [debug] 14365#0: *1 gzip alloc: n:32768 s:2 a:65536
p:B76F3008
07:44:01 [debug] 14365#0: *1 gzip alloc: n:32768 s:2 a:65536
p:B7703008
07:44:01 [debug] 14365#0: *1 gzip alloc: n:32768 s:2 a:65536
p:B7713008
07:44:01 [debug] 14365#0: *1 gzip alloc: n:16384 s:4 a:65536
p:B7723008
07:44:01 [debug] 14365#0: *1 gzip in: 0872BC84
07:44:01 [debug] 14365#0: *1 gzip in_buf: ni:086E16B0
07:44:01 [debug] 14365#0: *1 malloc: 6
07:44:01 [debug] 14365#0: *1 deflate in: ni:086E16B0
no: ai:3864 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:086E25C8
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:
pos:086E16B0
07:44:01 [debug] 14365#0: *1 gzip in: 0872BC8C
07:44:01 [debug] 14365#0: *1 gzip in_buf: ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write: out:, f:1
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" 0872BCD0
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BCE0
07:44:01 [debug] 14365#0: *1 gzip in_buf: ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:
07:44:01 [debug] 14365#0: *1 gzip in: 0872BCE8
07:44:01 [debug] 14365#0: *1 gzip in_buf: ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write: out:0872BA54, f:1
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" 0872BCF0
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD00
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BA20 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BA20
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD08
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BA98 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BA98
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write: out:0872BB44, f:1
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" 0872BD10
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD20
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BB10 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BB10
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD28
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BB88 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BB88
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write buf ls:1
07:44:01 [debug] 14365#0: *1 pipe write: out:0872BC34, f:0
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" 0872BC34
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD30
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BC00 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:4096 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BC00
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write: out:, f:0
07:44:01 [debug] 14365#0: *1 pipe read upstream: 1
07:44:01 [debug] 14365#0: *1 readv: 9:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 2536
07:44:01 [debug] 14365#0: *1 readv: 9:4096
07:44:01 [debug] 14365#0: *1 readv() not ready (11: Resource
temporarily unavailable)
07:44:01 [debug] 14365#0: *1 pipe recv chain: -2
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 2536 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
086E15C8, pos 086E15C8, size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe length: -1
07:44:01 [debug] 14365#0: *1 pipe write downstream: 1
07:44:01 [debug] 14365#0: *1 pipe write busy: 0
07:44:01 [debug] 14365#0: *1 pipe write: out:, f:0
07:44:01 [debug] 14365#0: *1 pipe read upstream: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 2536 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
086E15C8, pos 086E15C8, size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe length: -1
07:44:01 [debug] 14365#0: *1 event timer del: 12:
07:44:01 [debug] 14365#0: *1 event timer add: 12:
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 60000
07:44:01 [debug] 14365#0: epoll: fd:12 ev:0004 d:086F8655
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: timer delta: 11
07:44:01 [debug] 14365#0: posted events
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http upstream request:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http upstream dummy handler
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 59989
07:44:01 [debug] 14365#0: epoll: fd:12 ev:0005 d:086F8655
07:44:01 [debug] 14365#0: *1 post event 087115CC
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: timer delta: 40
07:44:01 [debug] 14365#0: posted events
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http upstream request:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http upstream dummy handler
07:44:01 [debug] 14365#0: posted event 087115CC
07:44:01 [debug] 14365#0: *1 delete posted event 087115CC
07:44:01 [debug] 14365#0: *1 http upstream request:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http upstream process upstream
07:44:01 [debug] 14365#0: *1 pipe read upstream: 1
07:44:01 [debug] 14365#0: *1 readv: 9:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 16
07:44:01 [debug] 14365#0: *1 readv: 9:4096
07:44:01 [debug] 14365#0: *1 pipe recv chain: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 2552 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
086E15C8, pos 086E15C8, size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe buf free s:0 t:1 f:0
, pos , size: 0 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 pipe length: -1
07:44:01 [debug] 14365#0: *1 input buf #9
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 01
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 03
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 01
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 08
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record byte: 00
07:44:01 [debug] 14365#0: *1 http fastcgi record length: 8
07:44:01 [debug] 14365#0: *1 http fastcgi sent end request
07:44:01 [debug] 14365#0: *1 input buf 6
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free: 086E15C8
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 pipe write downstream: 1
07:44:01 [debug] 14365#0: *1 pipe write downstream flush in
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" 0872BC34
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD40
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BC00 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:2536 ao:4096 fl:0 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:08733F68
no: ai:0 ao:4096 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BC00
07:44:01 [debug] 14365#0: *1 gzip in:
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 pipe write downstream done
07:44:01 [debug] 14365#0: *1 event timer: 12, old:
07:44:01 [debug] 14365#0: *1 http upstream exit:
07:44:01 [debug] 14365#0: *1 finalize http upstream request:
07:44:01 [debug] 14365#0: *1 finalize http fastcgi request
07:44:01 [debug] 14365#0: *1 free rr peer 1 0
07:44:01 [debug] 14365#0: *1 close http upstream connection:
07:44:01 [debug] 14365#0: *1 free: 086DA5B0, unused: 88
07:44:01 [debug] 14365#0: *1 event timer del: 12:
07:44:01 [debug] 14365#0: *1 reusable connection: 0
07:44:01 [debug] 14365#0: *1 http upstream temp fd: -1
07:44:01 [debug] 14365#0: *1 http output filter
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/administration.php/sf_guard_group?" BFEEF108
07:44:01 [debug] 14365#0: *1 http gzip filter
07:44:01 [debug] 14365#0: *1 gzip in: 0872BD7C
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BD48 ni:
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no: ai:0 ao:4096 fl:4 redo:0
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no: ai:0 ao:0 rc:0
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BD48
07:44:01 [debug] 14365#0: *1 malloc: 086E15C8:4096
07:44:01 [debug] 14365#0: *1 deflate in: ni:
no:086E15C8 ai:0 ao:4096 fl:4 redo:1
07:44:01 [debug] 14365#0: *1 deflate out: ni:
no:086E1DAD ai:0 ao:2075 rc:1
07:44:01 [debug] 14365#0: *1 gzip in_buf:0872BD48
07:44:01 [debug] 14365#0: *1 free: B76F1008
07:44:01 [debug] 14365#0: *1 http chunk: 10
07:44:01 [debug] 14365#0: *1 http chunk: 4096
07:44:01 [debug] 14365#0: *1 http chunk: 2029
07:44:01 [debug] 14365#0: *1 write old buf t:1 f:0 ,
pos , size: 366 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:1 f:0 0872BE58,
pos 0872BE58, size: 6 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:0 f:0 ,
pos 080CAFE4, size: 10 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:1 f:0 ,
pos , size: 4096 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:1 f:0 086E15C8,
pos 086E15C8, size: 2029 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:0 f:0 ,
pos 080BF4DC, size: 7 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 http write filter: l:1 f:1
07:44:01 [debug] 14365#0: *1 http write filter limit 0
07:44:01 [debug] 14365#0: *1 writev: 6514
07:44:01 [debug] 14365#0: *1 http write filter
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/administration.php/sf_guard_group?"
07:44:01 [debug] 14365#0: *1 http finalize request: 0,
"/administration.php/sf_guard_group?" a:1, c:1
07:44:01 [debug] 14365#0: *1 set http keepalive handler
07:44:01 [debug] 14365#0: *1 http close request
07:44:01 [debug] 14365#0: *1 http log handler
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free: 086E15C8
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free:
07:44:01 [debug] 14365#0: *1 free: 086E05C0, unused: 0
07:44:01 [debug] 14365#0: *1 free: , unused: 1426
07:44:01 [debug] 14365#0: *1 event timer add: 11:
07:44:01 [debug] 14365#0: *1 free: 086E0310
07:44:01 [debug] 14365#0: *1 free: 086DFF08
07:44:01 [debug] 14365#0: *1 hc free:
07:44:01 [debug] 14365#0: *1 hc busy:
07:44:01 [debug] 14365#0: *1 reusable connection: 1
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http keepalive handler
07:44:01 [debug] 14365#0: *1 malloc: 086DFF08:1024
07:44:01 [debug] 14365#0: *1 recv: fd:11 -1 of 1024
07:44:01 [debug] 14365#0: *1 recv() not ready (11: Resource
temporarily unavailable)
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 5000
07:44:01 [debug] 14366#0: timer delta: 499
07:44:01 [debug] 14366#0: posted events
07:44:01 [debug] 14366#0: worker cycle
07:44:01 [debug] 14366#0: accept mutex lock failed: 0
07:44:01 [debug] 14366#0: epoll timer: 500
07:44:01 [debug] 14367#0: timer delta: 500
07:44:01 [debug] 14367#0: posted events
07:44:01 [debug] 14367#0: worker cycle
07:44:01 [debug] 14367#0: accept mutex lock failed: 0
07:44:01 [debug] 14367#0: epoll timer: 500
07:44:01 [debug] 14368#0: timer delta: 500
07:44:01 [debug] 14368#0: posted events
07:44:01 [debug] 14368#0: worker cycle
07:44:01 [debug] 14368#0: accept mutex lock failed: 0
07:44:01 [debug] 14368#0: epoll timer: 500
07:44:01 [debug] 14365#0: epoll: fd:11 ev:0005 d:086F85F0
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: timer delta: 197
07:44:01 [debug] 14365#0: posted events
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http empty handler
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http keepalive handler
07:44:01 [debug] 14365#0: *1 recv: fd:11 441 of 1024
07:44:01 [debug] 14365#0: *1 reusable connection: 0
07:44:01 [debug] 14365#0: *1 malloc: 086E
07:44:01 [debug] 14365#0: *1 posix_memalign: 086E05C0:4096
07:44:01 [debug] 14365#0: *1 http process request line
07:44:01 [debug] 14365#0: *1 http request line: "GET
/css/blueprint/print.css HTTP/1.1"
07:44:01 [debug] 14365#0: *1 http uri:
"/css/blueprint/print.css"
07:44:01 [debug] 14365#0: *1 http args: ""
07:44:01 [debug] 14365#0: *1 http exten: "css"
07:44:01 [debug] 14365#0: *1 http process request header
07:44:01 [debug] 14365#0: *1 http header: "Host:
10.128.50.101"
07:44:01 [debug] 14365#0: *1 http header: "User-Agent:
Mozilla/5.0 (Windows NT 5.1; rv:7.0) Gecko/ Firefox/7.0"
07:44:01 [debug] 14365#0: *1 http header: "Accept:
text/css,*/*;q=0.1"
07:44:01 [debug] 14365#0: *1 http header: "Accept-Language:
es-es,q=0.8,en-q=0.5,q=0.3"
07:44:01 [debug] 14365#0: *1 http header: "Accept-Encoding:
gzip, deflate"
07:44:01 [debug] 14365#0: *1 http header: "Accept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7"
07:44:01 [debug] 14365#0: *1 http header: "Connection:
keep-alive"
07:44:01 [debug] 14365#0: *1 http header: "Referer:
http://10.128.50.101/administration.php/sf_guard_group"
07:44:01 [debug] 14365#0: *1 http header: "Cookie:
zera=vmlib90drktmrpmo3gdra6hi12; has_js=1"
07:44:01 [debug] 14365#0: *1 http header done
07:44:01 [debug] 14365#0: *1 event timer del: 11:
07:44:01 [debug] 14365#0: *1 generic phase: 0
07:44:01 [debug] 14365#0: *1 rewrite phase: 1
07:44:01 [debug] 14365#0: *1 test location: "/404.html"
07:44:01 [debug] 14365#0: *1 test location: "/sf/"
07:44:01 [debug] 14365#0: *1 test location: ~
".+\.(js|htc|ico|gif|jpg|png|css)$"
07:44:01 [debug] 14365#0: *1 using configuration
".+\.(js|htc|ico|gif|jpg|png|css)$"
07:44:01 [debug] 14365#0: *1 http cl:-1 max:1048576
07:44:01 [debug] 14365#0: *1 rewrite phase: 3
07:44:01 [debug] 14365#0: *1 post rewrite phase: 4
07:44:01 [debug] 14365#0: *1 generic phase: 5
07:44:01 [debug] 14365#0: *1 generic phase: 6
07:44:01 [debug] 14365#0: *1 generic phase: 7
07:44:01 [debug] 14365#0: *1 access phase: 8
07:44:01 [debug] 14365#0: *1 access phase: 9
07:44:01 [debug] 14365#0: *1 post access phase: 10
07:44:01 [debug] 14365#0: *1 content phase: 11
07:44:01 [debug] 14365#0: *1 content phase: 12
07:44:01 [debug] 14365#0: *1 content phase: 13
07:44:01 [debug] 14365#0: *1 content phase: 14
07:44:01 [debug] 14365#0: *1 content phase: 15
07:44:01 [debug] 14365#0: *1 content phase: 16
07:44:01 [debug] 14365#0: *1 http filename:
"/var/www/appname/web/css/blueprint/print.css"
07:44:01 [debug] 14365#0: *1 add cleanup: 086E0BEC
07:44:01 [debug] 14365#0: *1 http static fd: 12
07:44:01 [debug] 14365#0: *1 http set discard body
07:44:01 [debug] 14365#0: *1 HTTP/1.1 200 OK
Server: nginx/1.1.4
Date: Thu, 29 Sep :01 GMT
Content-Type: text/css
Content-Length: 1285
Last-Modified: Wed, 28 Sep :21 GMT
Connection: keep-alive
Expires: Thu, 31 Dec :55 GMT
Cache-Control: max-age=
Accept-Ranges: bytes
07:44:01 [debug] 14365#0: *1 write new buf t:1 f:0 086E0CFC,
pos 086E0CFC, size: 289 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 http write filter: l:0 f:0
07:44:01 [debug] 14365#0: *1 http output filter
"/css/blueprint/print.css?"
07:44:01 [debug] 14365#0: *1 http copy filter:
"/css/blueprint/print.css?"
07:44:01 [debug] 14365#0: *1 http postpone filter
"/css/blueprint/print.css?" BFEEF03C
07:44:01 [debug] 14365#0: *1 write old buf t:1 f:0 086E0CFC,
pos 086E0CFC, size: 289 file: 0, size: 0
07:44:01 [debug] 14365#0: *1 write new buf t:0 f:1 ,
pos , size: 0 file: 0, size: 1285
07:44:01 [debug] 14365#0: *1 http write filter: l:1 f:0
07:44:01 [debug] 14365#0: *1 http write filter limit 0
07:44:01 [debug] 14365#0: *1 writev: 289
07:44:01 [debug] 14365#0: *1 sendfile: @0 1285
07:44:01 [debug] 14365#0: *1 sendfile: 1285, @0
07:44:01 [debug] 14365#0: *1 http write filter
07:44:01 [debug] 14365#0: *1 http copy filter: 0
"/css/blueprint/print.css?"
07:44:01 [debug] 14365#0: *1 http finalize request: 0,
"/css/blueprint/print.css?" a:1, c:1
07:44:01 [debug] 14365#0: *1 set http keepalive handler
07:44:01 [debug] 14365#0: *1 http close request
07:44:01 [debug] 14365#0: *1 http log handler
07:44:01 [debug] 14365#0: *1 run cleanup: 086E0BEC
07:44:01 [debug] 14365#0: *1 file cleanup: fd:12
07:44:01 [debug] 14365#0: *1 free: 086E05C0, unused: 1631
07:44:01 [debug] 14365#0: *1 event timer add: 11:
07:44:01 [debug] 14365#0: *1 free: 086E0310
07:44:01 [debug] 14365#0: *1 free: 086DFF08
07:44:01 [debug] 14365#0: *1 hc free:
07:44:01 [debug] 14365#0: *1 hc busy:
07:44:01 [debug] 14365#0: *1 reusable connection: 1
07:44:01 [debug] 14365#0: *1 post event
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: *1 delete posted event
07:44:01 [debug] 14365#0: *1 http keepalive handler
07:44:01 [debug] 14365#0: *1 malloc: 086DFF08:1024
07:44:01 [debug] 14365#0: *1 recv: fd:11 -1 of 1024
07:44:01 [debug] 14365#0: *1 recv() not ready (11: Resource
temporarily unavailable)
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 5000
07:44:01 [debug] 14365#0: epoll: fd:6 ev:0001 d:086F8528
07:44:01 [debug] 14365#0: post event
07:44:01 [debug] 14365#0: timer delta: 10
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: delete posted event
07:44:01 [debug] 14365#0: accept on 0.0.0.0:80, ready: 0
07:44:01 [debug] 14365#0: posix_memalign: 086E
07:44:01 [debug] 14365#0: *4 accept: 10.35.9.129 fd:12
07:44:01 [debug] 14365#0: *4 event timer add: 12:
07:44:01 [debug] 14365#0: *4 epoll add event: fd:12 op:1
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: posted events
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 4990
07:44:01 [debug] 14365#0: epoll: fd:6 ev:0001 d:086F8528
07:44:01 [debug] 14365#0: post event
07:44:01 [debug] 14365#0: timer delta: 8
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: delete posted event
07:44:01 [debug] 14365#0: accept on 0.0.0.0:80, ready: 0
07:44:01 [debug] 14365#0: posix_memalign: 086E
07:44:01 [debug] 14365#0: *5 accept: 10.35.9.129 fd:13
07:44:01 [debug] 14365#0: *5 event timer add: 13:
07:44:01 [debug] 14365#0: *5 epoll add event: fd:13 op:1
07:44:01 [debug] 14365#0: posted event
07:44:01 [debug] 14365#0: posted events
07:44:01 [debug] 14365#0: worker cycle
07:44:01 [debug] 14365#0: accept mutex locked
07:44:01 [debug] 14365#0: epoll timer: 4982
07:44:01 [debug] 14365#0: epoll: fd:12 ev:0001 d:086F8654
07:44:01 [debug] 14365#0: *4 post event 087115CC
07:44:01 [debug] 14365#0: timer delta: 5
07:44:01 [debug] 14365#0: posted events 087115CC
07:44:01 [debug] 14365#0: posted event 087115CC
07:44:01 [debug] 14365#0: *4 delete posted event 087115CC
07:44:01 [debug] 14365#0: *4 malloc: 086E
07:44:01 [debug] 14365#0: *4 malloc: 086E07F8:1024
07:44:01 [debug] 14365#0: *4 posix_memalign: 086E0C00:4096
07:44:01 [debug] 14365#0: *4 http process request line
07:44:01 [debug] 14365#0: *4 recv: fd:12 458 of 1024
07:44:01 [debug] 14365#0: *4 http request line: "GET
/css/administration/modules/dashboard.css HTTP/1.1"
07:44:01 [debug] 14365#0: *4 http uri:
"/css/administration/modules/dashboard.css"
07:44:01 [debug] 14365#0: *4 http args: ""
07:44:01 [debug] 14365#0: *4 http exten: "css"
07:44:01 [debug] 14365#0: *4 http process request header
07:44:01 [debug] 14365#0: *4 http header: "Host:
10.128.50.101"
07:44:01 [debug] 14365#0: *4 http header: "User-Agent:
Mozilla/5.0 (Windows NT 5.1; rv:7.0) Gecko/ Firefox/7.0"
07:44:01 [debug] 14365#0: *4 http header: "Accept:
text/css,*/*;q=0.1"
07:44:01 [debug] 14365#0: *4 http header: "Accept-Language:
es-es,q=0.8,en-q=0.5,q=0.3"
07:44:01 [debug] 14365#0: *4 http header: "Accept-Encoding:
gzip, deflate"
07:44:01 [debug] 14365#0: *4 http header: "Accept-Charset:
ISO-8859-1,utf-8;q=0.7,*;q=0.7"
07:44:01 [debug] 14365#0: *4 http header: "Connection:
keep-alive"
07:44:01 [debug] 14365#0: *4 http header: "Referer:
http://10.128.50.101/administration.php/sf_guard_group"
07:44:01 [debug] 14365#0: *4 http header: "Cookie:
zera=vmlib90drktmrpmo3gdra6hi12; has_js=1"
07:44:01 [debug] 14365#0: *4 http header done
07:44:01 [debug] 14365#0: *4 event timer del: 12:
07:44:01 [debug] 14365#0: *4 generic phase: 0
07:44:01 [debug] 14365#0: *4 rewrite phase: 1
07:44:01 [debug] 14365#0: *4 test location: "/404.html"
07:44:01 [debug] 14365#0: *4 test location: "/sf/"
07:44:01 [debug] 14365#0: *4 test location: ~
".+\.(js|htc|ico|gif|jpg|png|css)$"
07:44:01 [debug] 14365#0: *4 using configuration
".+\.(js|htc|ico|gif|jpg|png|css)$"
07:44:01 [debug] 14365#0: *4 http cl:-1 max:1048576
07:44:01 [debug] 14365#0: *4 rewrite phase: 3
07:44:01 [debug] 14365#0: *4 post rewrite phase: 4
07:44:01 [debug] 14365#0: *4 generic phase: 5
07:44:01 [debug] 14365#0: *4 generic phase: 6
07:44:01 [debug] 14365#0: *4 generic phase: 7
07:44:01 [debug] 14365#0: *4 access phase: 8
07:44:01 [debug] 14365#0: *4 access phase: 9
07:44:01 [debug] 14365#0: *4 post access phase: 10
07:44:01 [debug] 14365#0: *4 content phase: 11
07:44:01 [debug] 14365#0: *4 content phase: 12
07:44:01 [debug] 14365#0: *4 content phase: 13
07:44:01 [debug] 14365#0: *4 content phase: 14
07:44:01 [debug] 14365#0: *4 content phase: 15
07:44:01 [debug] 14365#0: *4 content phase: 16
07:44:01 [debug] 14365#0: *4 http filename:
"/var/www/appname/web/css/administration/modules/dashboard.css"
07:44:01 [debug] 14365#0: *4 add cleanup: 086E123C
07:44:01 [debug] 14365#0: *4 http static fd: 14
07:44:01 [debug] 14365#0: *4 http set discard body
07:44:01 [debug] 14365#0: *4 HTTP/1.1 200 OK
Server: nginx/1.1.4
Date: Thu, 29 Sep :01 GMT
Content-Type: text/css
Content-Length: 3785
Last-Modified: Wed, 28 Sep :21 GMT
Connection: keep-alive
Expires: Thu, 31 Dec :55 GMT
Cache-Control: max-age=
Accept-Ranges: bytes
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,214#msg-216214
From sjaak23  Mon Oct  3 14:39:32 2011
From: sjaak23 (Sjaak Pieterse)
Date: Mon, 3 Oct 2011 14:39:32 +0200
Subject: HttpHealthcheckModule server not marked down
In-Reply-To:
References:
Message-ID:
Hey Liseen,
Thank you for the fix, it's working now for standard round robin. For
upstream_hash it's not working, but that's no problem for us; we are not
using that in production.
What we often use is gnosek-nginx-upstream-fair; to make it work with
that, can you tell how to handle it?
This is what I've done to make it work for now:
peckhardt at test-nginx:~/nginx-1.0.6$ patch -p1 <
../liseen-healthcheck_nginx_upstreams-17298cf/healthcheck.patch
patching file src/http/ngx_http_upstream.c
Hunk #1 succeeded at 4270 (offset 11 lines).
patching file src/http/ngx_http_upstream.h
patching file src/http/ngx_http_upstream_round_robin.c
Hunk #2 succeeded at 25 with fuzz 2 (offset 9 lines).
Hunk #3 succeeded at 33 (offset 9 lines).
Hunk #4 succeeded at 68 (offset 9 lines).
Hunk #5 succeeded at 416 (offset 7 lines).
Hunk #6 succeeded at 448 (offset 7 lines).
Hunk #7 succeeded at 465 (offset 7 lines).
Hunk #8 succeeded at 506 (offset 7 lines).
Hunk #9 succeeded at 523 (offset 7 lines).
Hunk #10 succeeded at 617 (offset 7 lines).
patching file src/http/ngx_http_upstream_round_robin.h
peckhardt at test-nginx:~/nginx-1.0.6$ sudo ./configure
--with-http_ssl_module
--add-module=/home/peckhardt/gnosek-nginx-upstream-fair-2131c73
--with-http_stub_status_module
--add-module=/home/peckhardt/liseen-healthcheck_nginx_upstreams-17298cf
--add-module=/home/peckhardt/liseen-nginx_upstream_hash-43fab03
--prefix=/usr/local/nginx-1.0.6 --with-debug
peckhardt at test-nginx:~/nginx-1.0.6$ sudo su
peckhardt at test-nginx:~/nginx-1.0.6$ make install clean
nginx config:
########### test healthcheck ######
upstream www-health {
    server 213.154.235.185 ;
    server 213.136.14.13 ;
    #hash $request_uri;
    #hash_again 1;
    healthcheck_enabled;
    healthcheck_delay 10000 ;
    healthcheck_timeout 1000;
    healthcheck_failcount 2;
    #healthcheck_expected 'I_AM_ALIVE';
    #Important: HTTP/1.0
    healthcheck_send "GET / HTTP/1.0" 'Host: health.';
}
> On Sat, Oct 1, 2011 at 7:16 AM, liseen wrote:
>> Please try:
>> /liseen/healthcheck_nginx_upstreams/blob/master/healthcheck.patch
>> patch -p1 < healthcheck.patch
>> ./configure ....
>> if you use healthcheck with upstream hash, please compile with branch
>> support_http_healthchecks of cep21's fork:
>> /cep21/nginx_upstream_hash/tree/support_http_healthchecks
> If all of an upstream's backends are down (per healthcheck), cep's
> upstream_hash will ignore the healthcheck. If that is not what you need,
> please try:
> /liseen/nginx_upstream_hash
> If you find something wrong, please open an issue on github. Thanks.
>> On Sat, Oct 1, 2011 at 6:06 AM, liseen wrote:
>>> It is a bug. The ngx_upstream_get_peer only checks the ... and forgot to
>>> check i itself.
>>> I used my nginx patch for healthcheck; I have used it in production more
>>> than half a year. I will upload it to my github in some hours.
>>> liseen
>>> On Fri, Sep 23, 2011 at 4:34 AM, Sjaak Pieterse
>>> wrote:
>>>> Hi there,
>>>> i'm trying to use the HttpHealthcheckModule for nginx, but i have some
>>>> troubles with it.
>>>> i have two servers in my upstream, when sabotaging the health for one
>>>> server i see in the status view of healthcheck that the server is
>>>> down(1), but if i go to the website i'm checking i still come out on
>>>> it and see a broken page.
>>>> how can i arrange that the server automatically is marked as down when
>>>> the check fails?
>>>> sorry for my bad english and maybe noob questions.
>>>> config:
>>>> upstream www-health {
>>>>     server x.x.x.1 ;
>>>>     server x.x.x.2 ;
>>>>     healthcheck_enabled;
>>>>     healthcheck_delay 10000 ;
>>>>     healthcheck_timeout 1000;
>>>>     healthcheck_failcount 2;
>>>>     #healthcheck_expected 'I_AM_ALIVE';
>>>>     #Important: HTTP/1.0
>>>>     healthcheck_send "GET / HTTP/1.0" 'Host: health.'
>>>>         'Connection: close' ;
>>>> }
>>>> nginx: nginx version: nginx/1.0.6
>>>> nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
>>>> nginx: TLS SNI support enabled
>>>> nginx: configure arguments: --with-http_ssl_module
>>>> --add-module=/gnosek-nginx-upstream-fair-2131c73
>>>> --with-http_stub_status_module
>>>> --add-module=/cep21-healthcheck_nginx_upstreams-b33a846
>>>> --prefix=/usr/local/nginx-1.0.6 --with-debug
>>>> used:
>>>> peckhardt at test-nginx:~/nginx-1.0.6$ patch -p1 <
>>>> /cep21-healthcheck_nginx_upstreams-5fa4bff/nginx.patch
>>>> hope someone would help me.
>>>> greetings
>>>> _______________________________________________
>>>> nginx mailing list
>>>> nginx at nginx.org
>>>> http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
From francis at daoine.org  Mon Oct  3 14:53:45 2011
From: francis at daoine.org (Francis Daly)
Date: Mon, 3 Oct 2011 14:53:45 +0100
Subject: Problem with GZIP
In-Reply-To:
References:
Message-ID:
On Mon, Oct 03, 2011 at 10:01:53AM -0400, firestorm wrote:
> Using Firefox's plug-in Firebug I can check gzip support; it's not
> enabled. There are 8 plain text components that should be sent compressed:
> http://10.128.50.101/css/principal.css
Your config says:
gzip_types text/plain application/xml application/x-javascript
Your output log says
Content-Type: text/css
Copy one big-enough file to a .txt-ending name -- for example copy
dashboard.css to dashboard.txt -- and then try to access that file and
see if it is gzipped.
If it is, then include the right content types in your gzip_types config
and it should Just Work.
If it isn't, then more investigation is needed.
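For example, one way to do that check from the command line, using the
host and paths from the debug log (the exact names don't matter):

    cp /var/www/appname/web/css/blueprint/screen.css \
       /var/www/appname/web/css/blueprint/screen.txt
    curl -sI -H 'Accept-Encoding: gzip' \
       http://10.128.50.101/css/blueprint/screen.txt | grep -i content-encoding

This should print "Content-Encoding: gzip" if text/plain responses are
being compressed.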
Good luck,
Francis Daly
francis at daoine.org
From r at roze.lv  Mon Oct  3 14:58:57 2011
From: r at roze.lv (Reinis Rozitis)
Date: Mon, 3 Oct 2011 14:58:57 +0300
Subject: Problem with GZIP
In-Reply-To:
References:
Message-ID:
> Using Firefox's plug-in Firebug I can check gzip support; it's not enabled:
> http://10.128.50.101/css/blueprint/screen.css
Your configuration ( http://forum.nginx.org/read.php?2,213#msg-216213 ) is missing text/css from gzip_types, which is the
content type returned by nginx here, so the gzip module doesn't do anything:
07:44:01 [debug] 14365#0: *4 http filename: "/var/www/appname/web/css/administration/modules/dashboard.css"
07:44:01 [debug] 14365#0: *4 HTTP/1.1 200 OK
Server: nginx/1.1.4
Date: Thu, 29 Sep :01 GMT
Content-Type: text/css
The line probably should be (add or remove additional document types if needed):
gzip_types text/plain text/css text/xml application/x-javascript;
p.s. Also the IE regex can be simplified:
gzip_disable msie6;
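Putting the pieces together, a typical block would look something like
this (values are examples, not a recommendation):

    gzip on;
    gzip_min_length 1000;
    gzip_comp_level 6;
    gzip_types text/plain text/css text/xml application/x-javascript;
    gzip_disable msie6;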
From nginx-forum at nginx.us  Mon Oct  3 15:09:47 2011
From: nginx-forum at nginx.us (firestorm)
Date: Mon, 03 Oct 2011 15:09:47 -0400
Subject: Problem with GZIP
In-Reply-To:
References:
Message-ID:
My configuration now is:
gzip_min_length 1000;
gzip_comp_level 6;
gzip_proxied expired no-cache no-
gzip_types text/plain application/xml application/x-javascript
#gzip_disable msie6;
and the problem remains.
Posted at Nginx Forum: http://forum.nginx.org/read.php?2,219#msg-216219
From liseen.wan  Mon Oct  3 17:53:13 2011
From: liseen.wan
Date: Tue, 4 Oct 2011 01:53:13 +0800
Subject: HttpHealthcheckModule server not marked down
In-Reply-To:
References:
Message-ID:
On Mon, Oct 3, 2011 at 10:39 PM, Sjaak Pieterse wrote:
> Hey Liseen,
> Thank you for the fix, it's working now for standard round robin. For
> upstream_hash it's not working, but that's no problem for us; we are not
> using that in production.
The fail test also add one time to hash_again.
Can you set hash_again
10(greater
than or equal servers's number), try again and
tell me the
The upstream_hash module
for some time.
what we often use is gnosek-nginx-upstream-fair, to make it work with
> that, can you tell how to handle?
Patch the upstream-fair module like round_robin and upstream_hash module.
Maybe these should be a module that contains all of the following features:
RR, Hash, Fair, Health check.
Hope some body will provide such module. I
don't like patching code.
> this is what i've done to make it work for now:
> peckhardt at test-nginx:~/nginx-1.0.6$ patch -p1
../liseen-healthcheck_nginx_upstreams-17298cf/healthcheck.patch
> patching file src/http/ngx_http_upstream.c
> Hunk #1 succeeded at 4270 (offset 11 lines).
> patching file src/http/ngx_http_upstream.h
> patching file src/http/ngx_http_upstream_round_robin.c
> Hunk #2 succeeded at 25 with fuzz 2 (offset 9 lines).
> Hunk #3 succeeded at 33 (offset 9 lines).
> Hunk #4 succeeded at 68 (offset 9 lines).
> Hunk #5 succeeded at 416 (offset 7 lines).
> Hunk #6 succeeded at 448 (offset 7 lines).
> Hunk #7 succeeded at 465 (offset 7 lines).
> Hunk #8 succeeded at 506 (offset 7 lines).
> Hunk #9 succeeded at 523 (offset 7 lines).
> Hunk #10 succeeded at 617 (offset 7 lines).
> patching file src/http/ngx_http_upstream_round_robin.h
> peckhardt at test-nginx:~/nginx-1.0.6$sudo ./configure
> --with-http_ssl_module
> --add-module=/home/peckhardt/gnosek-nginx-upstream-fair-2131c73
> --with-http_stub_status_module
> --add-module=/home/peckhardt/liseen-healthcheck_nginx_upstreams-17298cf
> --add-module=/home/peckhardt/liseen-nginx_upstream_hash-43fab03
> --prefix=/usr/local/nginx-1.0.6 --with-debug
> peckhardt at test-nginx:~/nginx-1.0.6$sudo su
> peckhardt at test-nginx:~/nginx-1.0.6$make install clean
> nginx config:
> ########### test healthcheck ######
upstream www-health{
server 213.154.235.185 ;
server 213.136.14.13 ;
#hash $request_
#hash_again 1;
healthcheck_
healthcheck_delay 10000 ;
healthcheck_timeout 1000;
healthcheck_failcount 2;
#healthcheck_expected 'I_AM_ALIVE';
#Important: HTTP/1.0
healthcheck_send "GET / HTTP/1.0" 'Host: health.';
> > On Sat, Oct 1, 2011 at 7:16 AM, liseen
> >> Please try:
> /liseen/healthcheck_nginx_upstreams/blob/master/healthcheck.patch
> >> patch -p1
>> ./configure ....
> >> if you use healthcheck with upstream hash, please compile with branch
> >> support_http_healthchecks of cep21's fork
> /cep21/nginx_upstream_hash/tree/support_http_healthchecks
> > if all upstreams' backends are down(healthcheck),
cep's upstream_hash
> > ignore Healthcheck,
if it is not you need, Please try:
/liseen/nginx_upstream_hash
> > If you find something wrong,
please open an issue on github. thanks.
> > liseen
> >> liseen
> >> On Sat, Oct 1, 2011 at 6:06 AM, liseen
> >>> It is a bug.
> >>> the ngx_upstream_get_peer only check the
> >>> check i itself.
> >>> I used my nginx patch for healthcheck,
I have used it in production
> >>> than half a year. I will upload it to my github in some hours.
> >>> liseen
> >>> On Fri, Sep 23, 2011 at 4:34 AM, Sjaak Pieterse
> >>> wrote:
> >>>> Hi there,
> >>>> i'm trying to use the HttpHealthcheckModule for nginx, but i have some
> >>>> troubles with it.
> >>>> i have two servers in my upstream, when sabotaging the health for one
> >>>> server i see in the status view of healthcheck that the server is
> >>>> down(1), but if i go to the website i'm checking i still come out on
> >>>> it and see a broken page.
> >>>> how can i arrange that the server automatically is marked as down when
> >>>> the check fails?
> >>>> sorry for my bad english and maybe noob questions.
> >>>> config:
upstream www-health{
server x.x.x.1 ;
server x.x.x.2 ;
healthcheck_
healthcheck_delay 10000 ;
healthcheck_timeout 1000;
healthcheck_failcount 2;
#healthcheck_expected 'I_AM_ALIVE';
#Important: HTTP/1.0
healthcheck_send "GET / HTTP/1.0" 'Host: health.'
> >>>> 'Conection: close' ;
> >>>> nginx: nginx version: nginx/1.0.6
> >>>> nginx: built by gcc 4.4.3 (Ubuntu 4.4.3-4ubuntu5)
> >>>> nginx: TLS SNI support enabled
> >>>> nginx: configure arguments: --with-http_ssl_module
> >>>> --add-module=/gnosek-nginx-upstream-fair-2131c73
> >>>> --with-http_stub_status_module
> >>>> --add-module=/cep21-healthcheck_nginx_upstreams-b33a846
> >>>> --prefix=/usr/local/nginx-1.0.6 --with-debug
> >>>> used:
> >>>> peckhardt at test-nginx:~/nginx-1.0.6$patch -p1
>>>> /cep21-healthcheck_nginx_upstreams-5fa4bff/nginx.patch
> >>>> hope someone would help me.
> >>>> greetings
> >>>> _______________________________________________
> >>>> nginx mailing list
> >>>> nginx at nginx.org
> >>>> http://mailman.nginx.org/mailman/listinfo/nginx
> > _______________________________________________
> > nginx mailing list
> > nginx at nginx.org
> > http://mailman.nginx.org/mailman/listinfo/nginx
> _______________________________________________
> nginx mailing list
> nginx at nginx.org
> http://mailman.nginx.org/mailman/listinfo/nginx
-------------- next part --------------
An HTML attachment was scrubbed...
From andrew.george.hammond
3 19:04:22 2011
From: andrew.george.hammond
(Andrew Hammond)
Date: Mon, 3 Oct :22 -0700
Subject: upload module parameters issue
Message-ID:
Hello all,
I am trying to implement resumable uploads using the nginx upload module to
a django script. I am running a deb-src build of the nginx 1.0.6 PPA with
the following config, which includes version 2.2.0 of the upload module
(renamed to nginx-upload-resume for consistency).
ahammond at ws-trans02:~$ nginx -V
nginx: nginx version: nginx/1.0.6
nginx: TLS SNI support enabled
nginx: configure arguments:
--prefix=/etc/nginx
--conf-path=/etc/nginx/nginx.conf
--error-log-path=/var/log/nginx/error.log
--http-client-body-temp-path=/var/lib/nginx/body
--http-fastcgi-temp-path=/var/lib/nginx/fastcgi
--http-log-path=/var/log/nginx/access.log
--http-proxy-temp-path=/var/lib/nginx/proxy
--http-scgi-temp-path=/var/lib/nginx/scgi
--http-uwsgi-temp-path=/var/lib/nginx/uwsgi
--lock-path=/var/lock/nginx.lock
--pid-path=/var/run/nginx.pid
--with-debug
--with-http_addition_module
--with-http_dav_module
--with-http_geoip_module
--with-http_gzip_static_module
--with-http_image_filter_module
--with-http_realip_module
--with-http_stub_status_module
--with-http_ssl_module
--with-http_sub_module
--with-http_xslt_module
--with-ipv6
--with-sha1=/usr/include/openssl
--with-md5=/usr/include/openssl
--with-mail
--with-mail_ssl_module
--add-module=/home/ahammond/nginx/nginx-1.0.6/debian/modules/nginx-echo
--add-module=/home/ahammond/nginx/nginx-1.0.6/debian/modules/nginx-upstream-fair
--add-module=/home/ahammond/nginx/nginx-1.0.6/debian/modules/nginx-upload-resume
My config is pretty simple. Uploads connect to the rspool location via a
proxy for resumable uploading, and then upload pass sends them to a fastcgi
that goes to a django script. Details follow.
ahammond at ws-trans02:/etc/nginx$ cat sites-enabled/rspool.conf
upstream transactions {
server localhost:10000;
listen 80;
include common.
root /nutricate/
client_max_body_size 50m;
location /rspool/ {
upload_pass @rspool_
upload_pass_
upload_pass_form_field "^unique_id$|^entity_id$|.*";
include upload_resume_
location @rspool_upload {
include fastcgi_
fastcgi_intercept_
ahammond at ws-trans02:/etc/nginx$ cat common.conf
error_page 500 /500.
error_page 502 /502.
error_page 503 /503.
error_page 504 /504.
location = /50x.html {
ahammond at ws-trans02:/etc/nginx$ cat fastcgi_params
fastcgi_param QUERY_STRING $query_
fastcgi_param REQUEST_METHOD $request_
fastcgi_param CONTENT_TYPE $content_
fastcgi_param CONTENT_LENGTH $content_
#fastcgi_param SCRIPT_FILENAME $request_
fastcgi_param SCRIPT_FILENAME $fastcgi_script_
#fastcgi_param
SCRIPT_NAME $fastcgi_script_
fastcgi_param SCRIPT_NAME '';
fastcgi_param REQUEST_URI $request_
fastcgi_param DOCUMENT_URI $document_
fastcgi_param DOCUMENT_ROOT $document_
fastcgi_param SERVER_PROTOCOL $server_
fastcgi_param GATEWAY_INTERFACE CGI/1.1;
fastcgi_param SERVER_SOFTWARE nginx/$nginx_
fastcgi_param REMOTE_ADDR $remote_
fastcgi_param REMOTE_PORT $remote_
fastcgi_param SERVER_ADDR $server_
fastcgi_param SERVER_PORT $server_
fastcgi_param SERVER_NAME $server_
# PHP only, required if PHP was built with --enable-force-cgi-redirect
fastcgi_param REDIRECT_STATUS 200;
# weird django requirements
fastcgi_param FCGI $server_
fastcgi_param PATH_INFO $fastcgi_script_
ahammond at ws-trans02:/etc/nginx$ cat upload_resume_params
upload_store /var/lib/nginx/resumable_download 1;
upload_store_access user:r grou

我要回帖

更多关于 新魔教传说1.66 的文章

 

随机推荐