Steven email: sjh@svana.org
web: https://svana.org/sjh
Wed, 06 Apr 2011
Connection limiting in Apache2 - 16:01
Looking at the logs it was interesting to note the User-Agent was identical for each request, even though the requests were coming from so many different IP addresses. So I had the situation of needing to limit connections to a certain type of file, or an area on disk, via Apache, so as to avoid resource starvation and download blow-outs. Looking around for ways to do this in Apache2, there were not a whole lot of options already implemented: some per-IP connection limits in one module, some rate limiting in another module, but no way to limit connections to a given Directory, VHost or Location immediately turned up. Fortunately a few different searches eventually turned up the libapache2-mod-bw package in Debian. As it says in the package description, "This module allows you to limit bandwidth usage on every virtual host or directory or to restrict the number of simultaneous connections." This seemed to be the solution, so I read the documentation in the text file in the package, enabled it on the server and got it working. To get it working, pay attention to the bit that says ExtendedStatus needs to be enabled before the LoadModule line. Then you can simply place the directives in a Directory section in your main config file for a given vhost. I configured it with the following section:

    ForceBandWidthModule On
    BandWidthModule On
    <Directory "/on/disk/location">
        BandWidth "u:BLAHBLAH" 200
        BandWidth all 2000000
        MaxConnection "u:BLAHBLAH" 1
        MaxConnection all 10
    </Directory>

This says: if the user agent has the string "BLAHBLAH" anywhere in it, limit it to 200 bytes per second and, further down, to 1 simultaneous connection to this directory. I thought it worthwhile to also put a limit of 10 on all connections to the directory, just in case the user agent changes, so it will not starve the machine or max out the link. Initially I had the limit of 10 without the stricter user-agent limit, and the DOS was simply using up all 10 connections, so no one else could connect and download these items.
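Since mod_bw here keys on the User-Agent string, it helps to confirm from the access log that the attack really does share one agent across many source addresses. A minimal sketch in Python, assuming the standard Apache "combined" log format (the sample agent strings and any paths are illustrative, not taken from the logs above):

```python
import re
from collections import defaultdict

# Apache "combined" log format:
# IP ident user [time] "request" status bytes "referer" "user-agent"
LOG_RE = re.compile(
    r'^(\S+) \S+ \S+ \[[^\]]+\] "[^"]*" \d+ \S+ "[^"]*" "([^"]*)"'
)

def ips_per_agent(lines):
    """Map each User-Agent string to the set of distinct client IPs using it."""
    agents = defaultdict(set)
    for line in lines:
        m = LOG_RE.match(line)
        if m:
            ip, agent = m.groups()
            agents[agent].add(ip)
    return agents
```

Run over something like /var/log/apache2/access.log (path will vary), an agent showing thousands of distinct IPs is the string to put into the BandWidth and MaxConnection "u:..." matches.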
Fortunately so far this seems to be working, and I can monitor it for a few days to see the resultant behaviour of the attack. Thanks to the module author: this seems to work fairly well and was easier than writing a mechanism inside Apache2 myself to limit the connections in the manner required.

Fri, 07 Aug 2009
Tracking down disk accesses - 14:02
I started looking around to see how to track down the problem. I have used iostat in the past to give me some details about disk activity. The problem with iostat, however, is that it does not tell you which process is doing things. Running top is also no good, as it does not identify I/O, nor does it show cumulative use/hits as iostat and similar tools do. There is a Python program called iotop I have not tried yet, and nowadays there are also files in /proc you can laboriously work your way through to track down some of this information. However, while looking around and reading some of the man pages, I discovered the existence of pidstat. This program is fantastic. It can display accumulated disk, VM, CPU and thread information on a per-process basis. This is a program I have wished I had for years.

So I ran pidstat -d 5 and watched to see what was writing to the disk so often. First I noticed the predictable kjournald. Rather than messing around trying to change its commit interval myself, I found there is a laptop-mode-tools package I should have had installed on my laptop. I have now installed it and enabled it to operate even when AC power is plugged in, and now kjournald seems to be able to go for minutes at a time without needing to write to disk. Next I noticed xulrunner-stub was writing often and causing the disk to spin up once it had spun down due to laptop_mode. This is Firefox (or Iceweasel in the Debian case). I found details suggesting Firefox 3.0.1 onward has an option to decrease the save/fsync regularity, and that 3.5 and up should be even better. I installed 3.5.1 from Debian experimental and found another page with 28 good Firefox tips, one of which actually told me which about:config option to change to decrease the save/sync interval. So the disk is not always spinning up or constantly accessing now, though there still appear to be a few culprits I could dig up more information on in the pidstat output.
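pidstat -d gets its per-process numbers from the counters the kernel exports under /proc/&lt;pid&gt;/io. As a rough sketch of what it accumulates (the field names are the real /proc ones, but the delta logic is illustrative, not pidstat's actual implementation):

```python
def parse_proc_io(text):
    """Parse the contents of /proc/<pid>/io into integer counters
    (rchar, wchar, syscr, syscw, read_bytes, write_bytes, ...)."""
    stats = {}
    for line in text.splitlines():
        key, sep, value = line.partition(":")
        if sep:
            stats[key.strip()] = int(value)
    return stats

def io_delta(before, after):
    """Difference between two samples, i.e. I/O done in the interval,
    which is roughly what 'pidstat -d 5' prints per process."""
    return {key: after[key] - before.get(key, 0) for key in after}
```

In practice you would read /proc/&lt;pid&gt;/io twice, a few seconds apart, and divide the read_bytes/write_bytes delta by the interval; that per-interval figure is what made kjournald and xulrunner-stub stand out here.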
Also, I may want to play around with more /proc settings, such as /proc/sys/vm/dirty_expire_centisecs, which can change how often pdflush sends data to disk, and there are other suggestions around which may help too. Update: since first writing this I have also found a good Linux Journal article on what laptop mode does. One of the reasons I am so excited about pidstat is that it helps with my work a lot, whether there is a problem with a mail server, a student login server or any number of other machines. Getting a read-out of this information by process, accumulated over time, is really useful for working out what is causing issues, and thus for controlling and preventing problems.

Wed, 08 Jul 2009
Success with WPA2 - 17:58
Due to concerns that the iwlagn driver could be bad, I upgraded my laptop kernel to the Debian sid 2.6.30 packages, then downloaded the latest wireless kernel drivers and installed them, along with the three programs mentioned: iw (the new interface to the wireless stack in Linux), crda and wireless-regdb. In the end I am not entirely convinced those things helped; many forum complaints for Ubuntu and other systems said network-manager had issues and suggested trying wicd. My initial efforts with wicd failed. Eventually, while reading about the efforts someone else had made to work out what was happening on their system, I saw someone using the rather simple iwlist tool to scan for the capabilities of the secure access points. When I did this I noticed the ANU-Secure access points all advertised the following.
    IE: IEEE 802.11i/WPA2 Version 1
        Group Cipher : CCMP
        Pairwise Ciphers (1) : CCMP
        Authentication Suites (1) : 802.1x

I had previously been trying TKIP as well as WPA2 when I tried wpa_supplicant alone, without a manager on top. (WPA2 and RSN are aliases for each other in this instance.) Anyway, with the new drivers and the sid wpa_supplicant I was able to get a wpa_supplicant.conf with the following contents to work on ANU-Secure.
    ctrl_interface=/var/run/wpa_supplicant
    ctrl_interface_group=root

    network={
        ssid="ANU-Secure"
        scan_ssid=0
        proto=RSN
        key_mgmt=WPA-EAP
        eap=PEAP
        pairwise=CCMP
        group=CCMP
        identity="u9999999"
        password="PASSWORD"
        phase1="peaplabel=0"
        phase2="auth=MSCHAPV2"
        # priority=1
    }

Then I looked through the wicd templates for one that had the minimum needed, and noticed the wicd PEAP-GTC template had the desired fields set. So now in wicd I can access ANU-Secure from the desktop with no problems. I really should test out older drivers and some other configurations, and also try out network-manager again, I think. It works for now though, and I can finally stop wasting so much time on this.

Thu, 02 Jul 2009
A regression for WPA2 - 18:20
I thought maybe there was some problem with my laptop hardware, and maybe the iwl4965 chipset simply would not do it under Linux. However, searching online suggested I should be able to make it do WPA2. Thinking maybe the Ubuntu people had done it right and Debian was missing something, I tried booting a Jaunty live CD. I also discovered a rather neat feature of suspend to disk (hibernate): you can hibernate your computer, boot off a live CD, use it, reboot, and have your existing session come right back up normally on the next boot. Anyway, I booted up Jaunty and tried to authenticate; it still failed in a similar manner to my Debian installation. Out of curiosity, as I had heard of Hardy working, I booted my laptop on a Hardy live CD. So network-manager and the iwlagn driver combined, on either Debian sid or Ubuntu Jaunty, had failed to authenticate. Ubuntu Hardy on the other hand, using an older version of network-manager and the iwl4965 driver in the kernel, worked fine: WPA2 authentication and use on the ANU-Secure wireless network. So now I need to find out where the regression happened that means WPA2 is broken in more recent releases of the software (kernel drivers, wpa_supplicant, network-manager) on either Debian or Ubuntu.

Thu, 29 May 2008
Some system config updates - 15:39
After lots of mucking around with fontconfig and other things trying to track down the issue, Tony suggested I look at the resolution for fonts in GNOME (System -> Preferences -> Appearance :: Fonts :: Details), wondering what my DPI for fonts was set to. His was set to 96; mine however was at 112. So I changed this, and all of a sudden the font in gnome-terminal could look identical to my xterm fixed font. Rock on; something I should share with the world here in case it comes up for others. Getting the font size right in the terminal application is important, as my brain is so used to a certain look there. On another note, I should probably stop bagging the nvidia setup as much as I have been. Sure, it is a pain that I can not use xrandr commands to automatically do funky stuff in a scripted environment; however, I can at least use the GUI tool nvidia-settings to do the stuff I want, even if it is not as nice as doing things automatically. Still, it sure would be nice if nvidia opened up and allowed open source development with full specs to the hardware. If this laptop had been available with the Intel chipset I would have specced it with that for sure.

Thu, 01 May 2008
Another Ubuntu annoyance - 22:03
Unfortunately in Ubuntu there is no way to disable this in GRUB; the UUID change is hard-coded into update-grub in /usr/sbin. At least in Debian it is still optional. Anyway, I had forgotten to modify update-grub to remove the UUID stuff and had installed a new kernel on a student server, then rebooted the machine, and hey presto, it did not come back online. If it were not for the need to run this server on Ubuntu, to be similar to the lab image and an easy environment for a student to duplicate at home, it would be so much easier to run Debian on it again. Of course, to compound the issue, this was a server I had to wait until after normal hours to take offline, so I was messing around with it after 7pm.

Mon, 28 Apr 2008
Update on deb package archive clearing. - 14:44
They all have a 100 Mbit (or better) link to the mirror, and it seems silly to have them using local disk storage once an entire successful apt run is finished. Andrew suggested the Dpkg::Post-Invoke hook could be used to run apt-get clean; my understanding upon reading the documentation last week was that it would run clean after every individual deb package is installed. I guess it is likely that, when installing large numbers of packages, it may not be run until after the postinst scripts; however, without looking closely, it appeared to me it might mess up install processes somehow. I may have gotten that intuition wrong; however, as pointed out in the other online response, it will not work for some use cases. It still seems the only current way to solve this is to add apt-get clean to cron (or, of course, to write a patch for apt that allows an Apt::Install-Success::Post method or something). Not really a huge problem for now, however as I said, strangely different from dselect and my expected capabilities.

Wed, 23 Apr 2008
Keeping /var/cache/apt/archives empty. - 13:02
So I had a look at the apt.conf and apt-get documentation, at /usr/share/doc/apt/examples/configure-index.gz, and a bit of a look around online to see how to disable the cache. I thought it might be bad to completely disable the directory packages sit in, as apt places them there when it downloads them. However, as the partial directory is used for packages in transit, I wondered if that was where packages were kept during the install process. Anyway, I tried adding Dir::Cache::Archive ""; and Dir::Cache::pkgcache ""; to a new file, /etc/apt/apt.conf.d/10pkgcache. This however did not change anything, and packages were still left in the archive. Next I tried setting both items to /dev/null; that caused a bus error when running apt-get install. I was kind of hoping there was some way to tell apt not to store files after it has run. dselect runs apt-get clean upon completion; there appears to be no way to tell apt to do a post-install hook and run clean when finished (assuming apt ran with no errors, in which case the post-install hook runs). The only way to do this appears to be to place apt-get clean in a crontab somewhere, which is a pain if you are short on disk space and so would like to get rid of packages as soon as installing is finished. Interestingly, /dev/null was also changed by what I tried above: it became a normal file, and I caused some other processes depending on it to fail. Restarting udev did not recreate the device (even though the udev config said to recreate it as a char device with the correct permissions set); instead it reappeared as a normal file with the wrong permissions, so some other running process seems to have interfered with /dev/null creation. Anyway, that was easily fixed with /bin/mknod; now if only the emptying of /var/cache/apt/archives were so easy without resorting to cron.

Thu, 21 Feb 2008
X and KDE out of sync - 17:59
In the last while the X.Org crew have been doing some great work to ensure X will generally run better with no config file around, working things out as it starts up and all that. However, KDE (at least the version in Kubuntu 7.10) has not caught up to the idea of querying the X server or working with it to that extent yet. I hope the newer KDE releases are heading this way; I should also check out GNOME and see if it handles this more cleanly. One thing I should note, though, is that xrandr really is seriously cool. I found the ThinkWiki xrandr page to be one of the best for describing the cool stuff it can do.

Wed, 09 May 2007
Silent G - 15:41
However, when I mentioned this problem to Bob he had a rather brilliant suggestion: they should have used a silent G, as is used in most open source recursive acronyms derived from the letters GNU (GNU itself, GNOME, etc). Just think, Ubuntu GFunky Gibbon. And for the bad pun lovers out there, I bet you can't wait until your UGG Boots.

Tue, 27 Feb 2007
Times when you wish etch were stable - 11:21
At this point I could either try testing/etch or install from a Dapper CD I had sitting in the office. As it would save burning an etch/testing CD (and we may need rc2 for a clean install anyway), I ended up installing Dapper. At least I can still use the Debian packages if need be; however, I am definitely looking forward to etch being stable so it will work on more recent hardware for a while. I guess the argument could be made that I should have simply used etch, and that if I am going to complain at all I should get off my arse and do work on Debian to help get it out the door. Ahh well, the machine is up and running now and I can get the work I need done on it.

Fri, 19 Jan 2007
The kernel hacker culling plan - 11:13
Wed, 01 Nov 2006
Kernel command line for environment variables - 14:56
I remembered reading something somewhere about setting the proxy environment variable on the kernel command line so that d-i would then be able to use it. I can find no documentation about this with respect to d-i. However, it seems to work correctly by putting append="http_proxy=blah" into the correct PXE boot file. AJ pointed out it is a kernel feature that allows variables entered in such a way to be passed to init (this is sort of hinted at in the kernel Documentation/kernel-parameters.txt file, though not made clear). Anyway, because d-i uses wget to fetch files (and even when it gets to apt, apt understands the same variable), this works correctly.

Mon, 05 Jun 2006
Small disks and low memory are not the default case. - 22:26
During the install it warned me that less than 95% of disk space was available; it did however make it through the install, and at the end cleared off a lot of language packs and other items, so there was around 320 MB of disk free. I rebooted and went to install "easy ubuntu" so my housemate could watch movies or RealPlayer files or whatever, and it said it would need around 300 MB of disk while doing the install. I have now removed all the CUPS and print drivers, all the non-Arabic TTF font packages, all the unneeded X display drivers and a bunch of other stuff to recover some more space. Obviously so few computers come with small disks that the need to cater for them is dwindling. At least the measly 256 MB of RAM in this system gets by (though slowly); if only there were more RAM slots on the motherboard, as I have around 30 128 MB sticks sitting in my office at work doing not much. Of course, I have a 486 DX2-66 with 16 MB RAM and a 420 MB drive sitting around somewhere; I wonder how that would fare? Though if we go that way, a whole lot of people could rear up commenting on us youth of today having it so easy compared to the punch cards and ticker tape of the days of yore.

Fri, 26 May 2006
External VGA on the laptop better now - 15:32
I had heard for a while that X.Org had fixed the driver enough to enable continuous output on both displays from the X server without these hacks, and the added bonus was that there was now a way to get overlay video reliably (it could work sometimes with the hacks on some computers). Today I tracked down a page on a ThinkPad wiki discussing Intel 855GM graphics set-up under Linux on some laptops. The xorg.conf changes and the xvattr command all work fine, and I now have better external video, with XV available if wanted. Yay.

Tue, 04 Apr 2006
Linux Australia membership Pants: off - 16:21
If anyone does not understand this in-joke, google for "Jeff Waugh Pants Off", as the Pants Off thing is a well known part of Linux Australia and GNOME.

Mon, 11 Jul 2005
Don't whine about Debian, if you care, fix it. - 16:33
Some of the comments on David's post suggest the NM hold-ups are the reason not many people stick around and help. I personally disagree with that: if you feel the need to be classified as a Debian developer to do useful work on Debian, I would look at that as some strange need for status, or for a dick-swinging d.o email address, for no apparent reason, at least from the perspective of doing useful work. If you want to create packages of software you use or need, it is not particularly difficult to find a maintainer to look over them and officially upload them and all that. On the other hand, if you want to do other things to help Debian, there is a whole lot you can help with without needing maintainer status. The biggest gripe a lot of people appear to have is how slow the release process is. There are ways to help with this; the biggest I would suggest is to attempt bug fixes, monitoring bugs.debian.org or, better yet, with the aim of assisting release readiness, the release-critical bugs page. If you see something you want to help on, or even if you are not sure, look through some bugs, see if you can duplicate them, work out a solution and provide a fix if you can. Anyone anywhere can help out with bugs or make an effort to fix things. Doing real helpful work, if you care enough, is oh so much better than sitting around on debian-devel whining or arguing about stuff. I am not really the best person to comment here, as I am generally extremely happy with Debian and do not generally do much work toward bug fixes of random software (i.e. stuff I do not use); however, I do not find there is much to complain about with Debian either.

Fri, 15 Apr 2005
Good ol' stat has been around a while - 21:22
I do not have any of my machines still running buzz or rex these days, so I can not simply log in and have a look. I had a look around: as noticed by Stewart, stat is in the coreutils package nowadays, which is part of base. Looking back through some archived Debian distributions I can find some traces. In the current sid coreutils changelog.Debian, the first entry is from 2002, stating it is a combination of the old base packages textutils, fileutils and shellutils. Those older packages do not appear to contain it; however, looking at a Contents file from 2.0 (hamm) (released in 1998, AFAIR), there is a stat program, in the utils section rather than base, in a package named stat. So it looks like stat has been available in Debian for a fairly long while. I also suspect it has been in Debian since that time, and in coreutils since the package was created; in the changelog for coreutils, the first mention of stat is:
    - stat accepts a new file format, %B, for the size of each block reported by %b

which is dated Mar 2003. As it is not a message along the lines of "add stat to package", I think it has been around for a while, and in base at least since 2002. I say "I think" as I can not summon the effort required to track the history of the command in Debian simply to suggest Stewart may be being lazy; after all, I am lazy too.

Tue, 01 Feb 2005
The only reason to reboot is a hardware upgrade. - 12:51
I would imagine Mikal and I are not the only people who find this behaviour incredible. There is no good reason to reboot a computer, in my world view, unless you have to do a hardware upgrade (such as replacing the entire computer). Okay, admittedly you still need to reboot to upgrade the kernel (if an important kernel security fix must be applied), however that may change (though it is not a kernel developer priority currently). With hot plugging, there are instances where you do not even have to shut down a computer to add or upgrade hardware. As an example, my previous desktop at work had an uptime of around 730 days, from doing a software image install until it was replaced with the new, faster hardware we purchased for the next round of deployments. On my current machine I have an uptime of 363 days. I use this computer every day at work for a whole variety of things; it does not sit in a corner gathering dust. If you need to reboot to avoid bugs, I would suggest using a less buggy operating system.

Wed, 29 Sep 2004
More on the Linux v Sun discussion - 11:25
Miguel commented about Greg rejecting the Sun guys' API stability arguments. I don't know that he rejected them so much as pointed out that the API is stable at the kernel <-> userspace interface, and has been for many years. It is kind of like GTK or Mono or something having published APIs as well as internal structures; there is not much software, if any, that needs to use the internal structures of those libraries. In the kernel, though, if someone has out-of-tree kernel code, it has to keep up with the kernel's internal structures. Andrew Morton talked about this issue at OLS this year, as have various other people: code that gets into the kernel will be maintained. Of course the trick then is getting your code into the kernel; to do this you really need to grok Linux kernel culture and work with it. Mikal pointed out there seem to be exceptions where Linus or others appear arbitrary, such as FUSE, which Mikal suggests won't get into the kernel as Linus thinks it is too close to a microkernel model. Personally, I would hope there are good technical reasons FUSE has not been accepted, rather than simply saying all file systems should be implemented entirely in kernel space (after all, do we really want GmailFS in kernel space?). Of course Linus is only human (unlike Alan (more Linus quotes)) and has been known to allow code into the kernel in a strange manner in the past, such as when Dave Miller got the bottom-halves stuff in a few years ago. (Anyone got a link to something about this, I wonder?)

Fri, 24 Sep 2004
Of course Sun doesn't really get it. - 11:40
Thu, 23 Sep 2004
Sour grapes in kernel coding - 19:36
Basically the LTT developers are whining about LTT not being accepted into the mainline Linux kernel, causing LTT to lag and allowing dtrace to become the more advanced technology. I have to agree with the Sun guys here; it seems to be sour grapes. In the case of the Linux kernel you simply need to work with the kernel maintainers the way they wish to work. First, provide code and tests or performance data to back up your ideas, to prove that some feature should be in the kernel. Then publicly work with the kernel maintainers to integrate your code and ideas in small patches. Do not develop elsewhere for some amount of time and then submit a huge monolithic patch, then whine when it is rejected.