Steven Hanley

About

email: sjh@svana.org

web: https://svana.org/sjh
twitter: https://twitter.com/sjhmtb
instagram: https://instagram.com/sjhmtb

Wed, 06 Apr 2011

Connection limiting in Apache2 - 16:01
Yesterday I noticed a machine I look after had been getting some form of DoS attack or similar. There are ISO images (700 MB files) on the server, and there had been a few hundred thousand download requests for them via the web server, from many different IP addresses.

Looking at the logs it was interesting to note the User-Agent was identical for each request, even though the requests came from so many different IP addresses. So I needed a way to limit connections to a certain type of file, or to an area on disk, via Apache, so the attack could not starve the machine of resources or blow out the download traffic.

Looking around for ways to do this in Apache 2 there were not a lot of options already implemented: some per-IP connection limits in one module, some rate limiting in another, but no way to limit connections to a given Directory, Vhost or Location immediately turned up. Fortunately a few different searches eventually led me to the libapache2-mod-bw package in Debian.

As it says in the package description:

This module allows you to limit bandwidth usage on every virtual host
or directory or to restrict the number of simultaneous connections.
This seemed to be the solution, so I read the documentation in the text file shipped in the package, enabled the module on the server and got it working.

To get it working, pay attention to the part of the documentation that says ExtendedStatus needs to be enabled before the LoadModule line. Then you can simply place the directives in a Directory section in the config file for a given vhost.

I configured it with the following section:

# Enable mod_bw and force it to handle all requests regardless of handler
ForceBandWidthModule On
BandWidthModule On

<Directory "/on/disk/location">
	# Any client whose User-Agent contains BLAHBLAH: 200 bytes/second
	BandWidth "u:BLAHBLAH" 200
	# Everyone else shares up to 2,000,000 bytes/second in this directory
	BandWidth all 2000000
	# A single simultaneous connection for the suspect User-Agent
	MaxConnection "u:BLAHBLAH" 1
	# And at most 10 simultaneous connections to the directory overall
	MaxConnection all 10
</Directory>
This says that if the User-Agent has the string "BLAHBLAH" anywhere in it, limit it to 200 bytes per second and a single connection to this directory. I thought it worthwhile to also cap connections to the directory at 10 overall, so that if the User-Agent changes the attack still will not starve the machine or max out the link.
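To check the limits actually apply you can request a file while spoofing the abusive User-Agent; a minimal sketch, with the hostname and ISO path as hypothetical examples:

# Fetch an ISO pretending to be the abusive client; curl's progress
# meter should report a transfer rate of around 200 bytes/second.
curl -o /dev/null -A "something BLAHBLAH something" http://example.org/isos/image.iso

# A second simultaneous request with the same User-Agent should be
# held off, since MaxConnection "u:BLAHBLAH" is 1.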

Initially I had only the limit of 10, without singling out the User-Agent, and the DoS simply used up all 10 connections so no one else could download these items. So far the stricter configuration seems to be working, and I can monitor it for a few days to see how the attack behaves.

Thanks to the module author: this works fairly well and was far easier than writing my own mechanism inside Apache to limit connections in the manner required.

[/comp/linux] link

Fri, 07 Aug 2009

Tracking down disk accesses - 14:02
In the few days since a recent reboot of my laptop (I normally simply put it to sleep) I have noticed a lot of disk access noise. One obvious pattern was an access I could hear approximately every 5 seconds; others were likely occurring as well.

I started looking around for ways to track down the problem. I have used iostat in the past to get some detail about disk activity, but the problem with it is that it does not tell you which process is responsible. Running top is no good either, as it does not identify IO, and it does not show cumulative usage the way iostat and similar tools do.

There is a Python program called iotop I have not tried yet, and these days there are files in /proc you can laboriously work through to dig out some of this information. However while looking around and reading man pages I discovered the existence of pidstat. This program is fantastic: it can display accumulated disk, VM, CPU and thread information on a per-process basis. It is a program I have wished I had for years.

So I ran pidstat -d 5 and watched to see what was writing to disk so often. First I noticed the predictable kjournald. Rather than messing around changing its commit interval, I found there is a laptop-mode-tools package I should have had installed on my laptop all along. I have now installed it and enabled it to operate even when AC power is plugged in, and kjournald can now go for minutes at a time without needing to write to disk.
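For reference, the monitoring command and the laptop-mode tweak look roughly like this (the config variable name is from my reading of the laptop-mode-tools documentation, so treat it as an assumption):

# Report per-process disk reads and writes every 5 seconds
pidstat -d 5

# In /etc/laptop-mode/laptop-mode.conf, run laptop mode on mains power too
ENABLE_LAPTOP_MODE_ON_AC=1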

Next I noticed xulrunner-stub was writing often, causing the disk to spin up now that laptop_mode was spinning it down. This is Firefox (or Iceweasel in the Debian case). I found details suggesting Firefox 3.0.1 onward has an option to decrease the save/fsync frequency, and that 3.5 and up should be even better. I installed 3.5.1 from Debian experimental and found another page with 28 good Firefox tips, one of which actually named the about:config option to change to decrease the save/sync interval.
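If memory serves, the relevant preference is the session-store save interval, so something like the following in about:config (the key name and value here are assumptions on my part; check the tips page):

# about:config - raise the session save interval from 10s to 60s
browser.sessionstore.interval = 60000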

So the disk is no longer always spinning up or constantly being accessed, though there still appear to be a few culprits I could chase further in the pidstat output. I may also want to play with more /proc settings such as /proc/sys/vm/dirty_expire_centisecs, which changes how long dirty data sits before pdflush sends it to disk; there are other suggestions around that may help too. Update: since first writing this I have found a good Linux Journal article on what laptop mode does.
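As a sketch of the sort of tuning I mean (the value is purely illustrative, not a recommendation):

# Let dirty pages age for 60 seconds before pdflush writes them back
echo 6000 > /proc/sys/vm/dirty_expire_centisecs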

One of the reasons I am so excited about pidstat is that it helps a lot with my work, whenever there is a problem with a mail server, a student login server or any number of other machines. Getting a read-out of this information per process, accumulated over time, is really useful for working out what is causing issues and then controlling and preventing problems.

[/comp/linux] link

Wed, 08 Jul 2009

Success with WPA2 - 17:58
After spending far more time than I should have, I have finally found a working configuration for the ANU WPA2 secure wireless on Linux. I spent a lot of time reading seemingly endless Ubuntu forum posts about problems that could have been wpa_supplicant, network-manager or kernel driver issues; bugs for the various complaints were being assigned to any one of those three.

Due to concerns that the iwlagn driver could be at fault, I upgraded my laptop kernel to the Debian sid 2.6.30 packages, then downloaded and installed the latest wireless kernel drivers, along with three related programs: iw (the new interface to the Linux wireless stack), crda and wireless-regdb.

In the end I am not entirely convinced those things helped. Many forum complaints for Ubuntu and other systems said network-manager had issues and recommended wicd, but my initial efforts with wicd failed too. Eventually, while reading someone else's efforts to work out what was happening on their system, I saw them using the rather simple iwlist tool to scan for the capabilities of the secure access points.
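The scan is along these lines (the interface name is an assumption; check yours with iwconfig):

# Scan and show each access point's advertised ciphers and auth suites
iwlist wlan0 scanning | grep -A 4 "IEEE 802.11i"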

When I did this I noticed the ANU-Secure access points all advertised the following:

IE: IEEE 802.11i/WPA2 Version 1
    Group Cipher : CCMP
    Pairwise Ciphers (1) : CCMP
    Authentication Suites (1) : 802.1x

I had previously been trying TKIP with WPA2 when running wpa_supplicant alone, without a manager on top. (WPA2 and RSN are aliases for each other in this context.) With the new drivers and the sid wpa_supplicant I was able to get a wpa_supplicant.conf containing the following to work on ANU-Secure:

ctrl_interface=/var/run/wpa_supplicant
ctrl_interface_group=root

network={
   ssid="ANU-Secure"
   scan_ssid=0
   proto=RSN
   key_mgmt=WPA-EAP
   eap=PEAP
   pairwise=CCMP
   group=CCMP
   identity="u9999999"
   password="PASSWORD"
   phase1="peaplabel=0"
   phase2="auth=MSCHAPV2"
#   priority=1
}
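To test that configuration without any manager in the way, wpa_supplicant can be run by hand in the foreground; the interface name and driver backend below are assumptions for my hardware:

# Watch the EAP exchange with debug output
wpa_supplicant -i wlan0 -D wext -c /etc/wpa_supplicant.conf -d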

Then I looked through the wicd templates for one with the minimum needed and noticed the PEAP-GTC template had the desired fields set. So now in wicd I can access ANU-Secure from the desktop with no problems. I really should test older drivers and some other configurations, and try network-manager again. It works for now though, and I can finally stop wasting so much time on this.

[/comp/linux] link

Thu, 02 Jul 2009

A regression for WPA2 - 18:20
For a while I was wondering why I could not use the ANU's WPA2 secure network from my laptop. I had heard reports that some Ubuntu hardy machines had worked. I run Debian unstable with a kernel.org 2.6.29.3 kernel on this laptop.

I thought perhaps there was some problem with my laptop hardware, and that the iwl4965 chipset simply would not do it under Linux. However searching online suggested I should be able to make it do WPA2.

Thinking maybe the Ubuntu people had it right and Debian was missing something, I tried booting a Jaunty live CD. Along the way I discovered a rather neat property of suspend to disk (hibernate): you can hibernate your computer, boot off a live CD, use it, reboot, and have your existing session come right back up on the next boot.

Anyway, I booted Jaunty and tried to authenticate; it failed in a similar manner to my Debian installation. Out of curiosity, as I had heard of hardy working, I then booted the laptop from a hardy live CD. So network-manager and the iwlagn driver combined had failed to authenticate on both Debian sid and Ubuntu jaunty. Ubuntu hardy on the other hand, using an older version of network-manager and the iwl4965 driver in the kernel, worked fine: WPA2 authentication and use on the ANU-Secure wireless network.

So now I need to find where the regression happened that breaks WPA2 in more recent releases of the software stack (kernel drivers, wpa_supplicant, network-manager) on both Debian and Ubuntu.

[/comp/linux] link

Thu, 29 May 2008

Some system config updates - 15:39
I have been using xterm as my default terminal for years. On Wednesday morning when Tony noticed this he suggested I look at gnome-terminal, as it has some advantages such as Ctrl-click URL loading. However I could not get my font (the default system fixed, size 10) to look right or be sized correctly in gnome-terminal, even though it looked fine in xterms.

After lots of mucking around with fontconfig and other things trying to track down the issue, Tony suggested I check the font resolution in GNOME under System -> Preferences -> Appearance :: Fonts :: Details, wondering what my font DPI was set to. His was 96; mine was 112. I changed it, and all of a sudden the font in gnome-terminal looked identical to my xterm fixed font. Rock on; something worth sharing here in case it comes up for others. Getting the font size right in the terminal application is important, as my brain is very used to a certain look there.
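The same setting can also be changed from the command line via gconf; a sketch, assuming the key path I believe GNOME uses for this:

# Force the GNOME font DPI to 96 rather than the detected value
gconftool-2 --set /desktop/gnome/font_rendering/dpi --type float 96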

On another note, I should probably stop bagging the nvidia setup as much as I have been. Sure, it is a pain that I cannot use xrandr commands to script funky display changes, but I can at least use the GUI tool nvidia-settings to do what I want, even if it is not as nice as doing things automatically. Still, it sure would be nice if nvidia opened up and allowed open source development with full specs for the hardware. If this laptop had been available with the Intel chipset I would have specced it with that for sure.

[/comp/linux] link

Thu, 01 May 2008

Another Ubuntu annoyance - 22:03
I was bitten once more today by Ubuntu forcing the use of UUIDs to identify disks (in grub and other places). We have a lot of systems at work (student labs) that we update or synchronise with rsync rather than an install mechanism such as cfengine or FAI. Thus if a grub menu.lst or an fstab is copied over and not modified for the target machine, the machine will not boot, because the UUIDs refer to the source machine's disks.

Unfortunately in Ubuntu there is no way to disable this in grub; the UUID behaviour is hard coded into update-grub in /usr/sbin. At least in Debian it is still optional. Anyway, I had forgotten to modify update-grub to remove the UUID handling before installing a new kernel on a student server, then rebooted the machine, and hey presto, it did not come back online.
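The workaround on our rsync-synchronised machines is to pin the boot and mount devices to plain device paths; a sketch of the menu.lst kopt line and a matching fstab entry (the device names are examples only):

# /boot/grub/menu.lst: keep root= as a device path, not root=UUID=...
# kopt=root=/dev/sda1 ro

# /etc/fstab: likewise refer to the device directly
/dev/sda1  /  ext3  defaults,errors=remount-ro  0  1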

If it were not for the need to run this server on Ubuntu, to keep it similar to the lab image and an easy environment for a student to duplicate at home, it would be so much easier to run Debian on it again. To compound the issue, this was a server I had to wait until after normal hours to take offline, so I was messing around with it after 7pm.

[/comp/linux] link

Mon, 28 Apr 2008

Update on deb package archive clearing. - 14:44
In response to my last post, a few people suggested in email and online using file:// URIs in sources.list, as that stops apt from using the cache. That would indeed fix the problem for the one machine I was talking about (the mirror itself); however I should admit I had also been thinking about all the desktops and servers that use Debian or Ubuntu in the department here at work.

They all have a 100 Mbit (or better) link to the mirror, and it seems silly to have them holding packages on local disk once an entire successful apt run has finished. Andrew suggested the Dpkg::Post-Invoke hook could be used to run apt-get clean. My understanding from reading the documentation last week was that it would run clean after every individual deb package is installed. I guess when installing large numbers it may not run until after the post-inst scripts, but without looking closely it appeared to me it might mess up the install process somehow. I may have got that intuition wrong; however, as pointed out in the other online response, it will not work for some use cases.
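For reference, the hook Andrew means goes in a file under /etc/apt/apt.conf.d; a sketch (the file name is arbitrary):

// /etc/apt/apt.conf.d/90clean: run clean after each dpkg invocation
DPkg::Post-Invoke { "apt-get clean"; };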

It still seems the only current way to solve this is to add apt-get clean to cron (or, of course, to write a patch for apt adding something like an Apt::Install-Success::Post hook). Not a huge problem for now, but as I said, strangely at odds with dselect and the capabilities I expected.

[/comp/linux] link

Wed, 23 Apr 2008

Keeping /var/cache/apt/archives empty. - 13:02
On mirror.linux.org.au I noticed we store packages in /var/cache/apt/archives. This seems somewhat silly considering the machine is a full Debian mirror (it is ftp.au.debian.org); okay, security updates are not mirrored there, but they are not a big download cost.

So I had a look at the apt.conf and apt-get documentation, at /usr/share/doc/apt/examples/configure-index.gz, and around online, to see how to disable the cache. I thought it might be bad to completely disable the directory packages sit in, as apt places them there when it downloads them. However, since the partial directory is used for packages in transit, I wondered whether that was where packages were kept during the install process anyway.

Anyway, I tried adding Dir::Cache::Archive ""; and Dir::Cache::pkgcache ""; to a new file, /etc/apt/apt.conf.d/10pkgcache. This did not change anything; packages were still left in the archive. Next I tried setting both items to /dev/null, which caused a bus error when running apt-get install. I was hoping there was some way to tell apt not to store files after it has run. dselect runs apt-get clean upon completion, but there appears to be no way to tell apt itself to run a post-install hook and clean up when finished (assuming apt ran with no errors, in which case the hook would run).

The only way to do this appears to be to place apt-get clean in a crontab somewhere, which is a pain if you are short on disk space and would like to get rid of packages as soon as installation finishes. Interestingly, my experiments above also damaged /dev/null: it became a normal file, which caused some processes depending on it to fail. Restarting udev did not recreate the device (even though the udev config said to create it as a character device with the correct permissions); it reappeared as a normal file with the wrong permissions, so some other running process seems to have interfered with its creation. That at least was easily fixed with /bin/mknod; if only emptying /var/cache/apt/archives were as easy without resorting to cron.
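For completeness, the cron entry and the /dev/null repair were along these lines (1 and 3 are the standard major and minor numbers for the null device):

# root crontab: clear the package cache once a day
@daily apt-get clean

# recreate /dev/null as a world read/write character device
mknod -m 666 /dev/null c 1 3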

[/comp/linux] link

Thu, 21 Feb 2008

X and KDE out of sync - 17:59
A new Dell Latitude D430 belonging to one of the academics at work was showing some problems getting X to work as we wanted. It is now running Gutsy, which at first did not pick up the intel video driver when I removed the i810 driver. The more annoying thing in this setup is that with no xorg.conf, kdm works fine but KDE reverts to a lower resolution. Although I can change that with xrandr, the KDE display resolution settings do not work if there is no xorg.conf.
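Fixing the resolution by hand after login is at least scriptable; a sketch, with the output and mode names as assumptions for this panel:

# List the detected outputs and modes, then force the native mode
xrandr -q
xrandr --output LVDS --mode 1280x800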

Lately the X.Org crew have been doing some great work to ensure X generally runs better with no config file around, working things out as it starts up. However KDE (at least the version in Kubuntu 7.10) has not yet caught up with the idea of querying the X server and working with it to that extent.

I hope newer KDE releases are heading this way; I should also check whether GNOME handles this more cleanly. One thing I should note is that xrandr really is seriously cool. I found the ThinkWiki xrandr page to be one of the best descriptions of the cool stuff it can do.

[/comp/linux] link

Wed, 09 May 2007

Silent G - 15:41
I commented to jdub and a few others that it is sort of a shame Ubuntu releases are named with the same first letter in both words, the next release being named "Gutsy Gibbon". As they are bringing in the Gibbon it would be so much better to call it Funky Gibbon, in reference to The Goodies.

However, when I mentioned this problem to Bob he had a rather brilliant suggestion: they should have used a silent G, as is used in most open source recursive acronyms derived from the letters GNU (GNU itself, GNOME, etc).

Just think Ubuntu GFunky Gibbon.

And for the bad pun lovers out there, I bet you can't wait until your UGG Boots.

[/comp/linux] link

Tue, 27 Feb 2007

Times when you wish etch were stable - 11:21
I really cannot whine about this, as I do not do the work to fix it myself, but yesterday I tried to install sarge (3.1r5) on a recent Dell machine (a Dimension C521) for something I was working on. First the stock CD's installer would not see the CD drive or the hard disk, so I found a sarge install CD someone had created with a 2.6.20 kernel, and that worked. The next hurdle, once packages were installed, was that X did not just work with the nvidia card in the machine (the nv free driver in 4.3.0).

At this point I could either try testing (etch) or install from a dapper CD I had sitting in the office. As it saved burning an etch/testing CD (and we may need installer rc2 for a clean install anyway?), I ended up installing dapper. At least I can still use the Debian packages if need be, but I am definitely looking forward to etch being stable so it will work on more recent hardware for a while.

I guess the argument could be made that I should simply have used etch, and that if I am going to complain at all I should get off my arse and do work on Debian to help get it out the door. Ah well, the machine is up and running now and I can get the work I need done on it.

[/comp/linux] link

Fri, 19 Jan 2007

The kernel hacker culling plan - 11:13

Photo: Linus riding along on Geoffrey's Segway
Back in 1994 when Linus visited Andrew Tridgell in Canberra, Tridge took him out to the National Aquarium and tried to kill him off with a bunch of rabid biting penguins. A few years later, after an lca, Alan Cox also came to Canberra for a visit; Tridge took him horse riding and he fell off the horse. More proof that Tridge is trying to kill off the kernel hackers. I suspect this is a large part of why Linus did not want to come back to Canberra in 2005; apart from having been there before, he was wary of being near Tridge on his home turf.

Anyway, at lca this year Linus and various other kernel hackers are in attendance. However, because all kernel hackers are trained at kernel hacker school to be wary of Tridge, Tridge had to get someone else to do the culling this time. In this instance it is Geoffrey Bennett, with his open source/open hardware hand-built Segway vehicle.

Geoffrey has mentioned to a few people that going too fast on his can cause a face plant or other problems, as it does not yet back off correctly when it hits top motor speed. I am sure he will have that fixed soon; however he has left the fix out for now, so experienced kernel hacker Segway riders may be tempted to take it for a fast spin and possibly have an incident, furthering Tridge's kernel hacker cull.

[/comp/linux] link

Wed, 01 Nov 2006

Kernel command line for environment variables - 14:56
So, installing a Debian-based system from a network boot server: plug a computer into the network and the Debian installer appears (or similar; in this case it is actually Ubuntu). I was trying to work out how to ensure a proxy would be used for all the files downloaded during an install (Packages files, .debs, etc). The default d-i can ask you for a proxy, however the one we are using did not.

I remembered reading somewhere about setting the proxy environment variable on the kernel command line so that d-i could use it. I can find no documentation of this with respect to d-i, however it works correctly if you put append="http_proxy=blah" into the correct PXE boot file. AJ pointed out this is a kernel feature: name=value arguments the kernel does not itself understand are passed to init as environment variables (this is hinted at in the kernel's Documentation/kernel-parameters.txt file, though not made clear). Because d-i uses wget to fetch files (and when it gets to apt, apt understands the same variable), this works correctly.
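The PXE config entry ends up looking something like this (the paths and proxy host are examples, not our real setup):

# pxelinux.cfg/default: pass http_proxy through to the installer
label install
  kernel debian-installer/i386/linux
  append initrd=debian-installer/i386/initrd.gz http_proxy=http://proxy.example.com:3128/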

[/comp/linux] link

Mon, 05 Jun 2006

Small disks and low memory are not the default case. - 22:26
After yet another 10 GB disk died in the computer I had installed Ubuntu on for a housemate, I noticed I had a reliable-seeming 2.5 GB disk sitting around, so I put that in and started an install of dapper.

During the install it warned me that less than 95% of the required disk space was available. It did make it through, and at the end of the install cleared off a lot of language packs and other items, leaving around 320 MB of free disk. I rebooted and went to install EasyUbuntu so my housemate could watch movies or RealPlayer files or whatever, and it said it would need around 300 MB of disk while doing the install.

I have now removed all the CUPS and print driver packages, all the non-Arabic TTF font packages, all the unneeded X display drivers and a bunch of other stuff to recover some more space. Obviously so few computers come with small disks that the need to cater for them is dwindling. At least the measly 256 MB of RAM in this system gets by (though slowly); if only there were more RAM slots on the motherboard, as I have around 30 128 MB sticks sitting in my office at work doing not much.

Of course I have a 486 DX2-66 with 16 MB of RAM and a 420 MB drive sitting around somewhere; I wonder how that would fare? Though if we go that way, a whole lot of people could rear up commenting on us youth of today having it so easy compared to the punch cards and ticker tape of days of yore.

[/comp/linux] link

Fri, 26 May 2006

External VGA on the laptop better now - 15:32
When I got this laptop (a Dell X300) back in 2004, the way to get the external VGA display working under Linux was the i810switch program; a while later I had to use the i855crt program instead. These worked (though it meant remembering to run a command whenever I used a projector for presentations), however I was unable to display XV (overlaid video) output this way, and there were some mouse problems with the HW or SW cursor or something.

I had heard for a while that X.Org had fixed the driver enough to drive both outputs continuously from the X server without these hacks, with the added bonus that there was now a way to get overlay video reliably (it could work sometimes with the hacks on some computers). Today I tracked down a page on a ThinkPad wiki discussing Intel 855GM graphics setup under Linux on some laptops. The xorg.conf changes and the xvattr command all work fine, and I now have better external video, with XV available if wanted. Yay.
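For the record, the xorg.conf changes were roughly as follows; I am quoting from memory of the wiki page, so the option and attribute names here are assumptions:

Section "Device"
	Identifier "Intel 855GM"
	Driver "i810"
	# Drive the laptop panel (LFP) and external VGA (CRT) together
	Option "MonitorLayout" "CRT+LFP"
	Option "Clone" "true"
EndSection

# Then move the XV overlay to the desired head with xvattr, e.g.
# xvattr -a XV_PIPE -v 1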

[/comp/linux] link

Tue, 04 Apr 2006

Linux Australia membership Pants: off - 16:21
Stewart put out a new release of the memberdb code the other day. I downloaded it as I need to use something like it soon, and was reading the sources. I was glad to see Stewart had ensured the memberdb code has Pants: off.

echo -e "HEAD http://www.linux.org.au/membership/ HTTP/1.0\r\n\r\n" | nc www.linux.org.au 80 | grep Pants
X-Pants: off
The above is courtesy of the header("X-Pants: off") call in both index.php and exportmembers.inc. I notice the Pants feature was also in the 3.1 release of the code. Well done Stewart.

If anyone does not understand this in-joke, google "Jeff Waugh Pants Off"; the Pants off thing is a well known part of Linux Australia and GNOME culture.

[/comp/linux] link

Mon, 11 Jul 2005

Don't whine about Debian, if you care, fix it. - 16:33
On p.d.n this morning I saw a post from David Nusinow, in semi-rant mode, suggesting anyone who whines about Debian should instead fix stuff. I wholeheartedly agree: if you want something fixed in Debian you can almost always get involved in some manner and get it fixed. Sure, it may take real effort to change things like the entire release process or the repository layout (ask AJ about the amount of work involved in that sort of thing), but the point stands: you can get involved and get stuff fixed.

Some of the comments on David's post suggest the NM hold-ups are the reason not many people stick around and help. I personally disagree. If you feel the need to be classified as a Debian developer before doing useful work on Debian, that looks to me like a strange need for status, or for a dick-swinging d.o email address, for no apparent reason, at least from the perspective of doing useful work. If you want to create packages of software you use or need, it is not particularly difficult to find a maintainer to look them over and officially upload them. And if you want to do other things to help Debian, there is a whole lot you can do without any need for maintainer status.

The biggest gripe a lot of people appear to have is how slow the release process is. There are ways to help with this too; the biggest I would suggest is attempting bug fixes, monitoring bugs.debian.org or, better yet, with the aim of assisting release readiness, the release critical bugs page. If you see something you want to help with, or even if you are not sure, look through some bugs, see if you can reproduce them, work out a solution and provide a fix if you can. Anyone anywhere can help out with bugs or make an effort to fix things. Doing real, helpful work, if you care enough, is so much better than sitting around on debian-devel whining or arguing about stuff.

I am not really the best person to comment here: I am generally extremely happy with Debian, and do not do much work on bug fixes for random software (i.e. stuff I do not use), but then I do not find much to complain about with Debian either.

[/comp/linux] link

Fri, 15 Apr 2005

Good ol' stat has been around a while - 21:22
To take my mind off other stuff while eating dinner, I took up the challenge presented by Stewart, as commented on by Mikal: has the command line utility stat been around for a while in Debian?

None of my machines still run buzz or rex these days, so I cannot simply log in and have a look. Instead I looked around the archives. As Stewart noticed, stat is in the coreutils package these days, which is part of base. Looking back through some archived Debian distributions I found some traces: in the current sid coreutils changelog.Debian, the first entry is from 2002 and states the package is a combination of the old base packages textutils, fileutils and shellutils. Those older packages do not appear to have contained stat; however, looking at a Contents file from 2.0 (hamm, released in 1998 AFAIR), there is a stat program, in the utils section rather than base, in a package named stat.

So it looks like stat has been available in Debian for a fairly long while; I suspect it has been there continuously since then, and in coreutils since that package was created. In the coreutils changelog the first mention of stat is:

- stat accepts a new file format, %B, for the size of each block reported by %b

This is dated March 2003. As it is not a message along the lines of "add stat to the package", I think stat has been around for a while, and in base at least since 2002. I say "I think" because I cannot summon the effort required to fully trace the command's history in Debian simply to suggest Stewart may be being lazy; after all, I am lazy too.

[/comp/linux] link

Tue, 01 Feb 2005

The only reason to reboot is a hardware upgrade. - 12:51
Mikal mentioned he only reboots computers when he has to, linking to a post by Scoble in which Scoble says he does not hit bugs in certain software or systems because he reboots his palm computer every day. Scoble says he learnt this was the best way to do things when working with System 7 Macs back in the 90s, and he still does it with the computers he works on today.

I imagine Mikal and I are not the only people who find this behaviour incredible. In my world view there is no good reason to reboot a computer unless you have to do a hardware upgrade (such as replacing the entire computer). Admittedly you still need to reboot to upgrade the kernel (when an important kernel security fix must be applied), though that may change one day (it is not a kernel developer priority currently). And with hot plugging there are cases where you do not even have to shut a computer down to add or upgrade hardware.

As an example, my previous desktop at work had an uptime of around 730 days, from the software image install until it was replaced with the new, faster hardware we purchased for the next round of deployments. On the current machine I have an uptime of 363 days. I use this computer every day at work for a whole variety of things; it does not sit in a corner gathering dust. If you need to reboot to avoid bugs, I would suggest using a less buggy operating system.

[/comp/linux] link

Wed, 29 Sep 2004

More on the Linux v Sun discussion - 11:25
This morning I noticed that Miguel de Icaza's activity log mentions the Sun and Linux kernel discussion I have talked about previously. Miguel suggests Greg KH is missing the point. I am not so sure: Greg was not, as Miguel suggested, arguing from an "everything we do is fine, there is no need to improve" viewpoint. Greg is a well-balanced guy, and looking at the crap he has dealt with on LKML and elsewhere over the years, he clearly understands and respects other viewpoints, and will change when a technically correct and superior approach is demonstrated.

Miguel commented on Greg rejecting the Sun guys' API stability arguments. I don't know that he rejected them so much as pointed out that the kernel <-> userspace API is stable and has been for many years. It is rather like GTK or Mono having published APIs as well as internal structures: little if any software needs to use those libraries' internals. In the kernel, though, out-of-tree code has to keep up with the kernel's internal structures. Andrew Morton talked about this issue at OLS this year, as have various other people: code that gets into the kernel will be maintained.

Of course the trick then is getting your code into the kernel, and to do that you really need to grok Linux kernel culture and work with it. Mikal pointed out there seem to be exceptions where Linus or others appear arbitrary, such as FUSE, which Mikal suggests won't get into the kernel because Linus thinks it is too close to a microkernel model. Personally I would hope there are good technical reasons FUSE has not been accepted, rather than a blanket position that all file systems should be implemented entirely in kernel space (after all, do we really want GmailFS in kernel space?). Of course Linus is only human (unlike Alan (more Linus quotes)) and has been known to let code into the kernel in strange ways in the past, such as when Dave Miller got the bottom halves work in a few years ago. (Anyone got a link to something about this, I wonder?)

[/comp/linux] link

Fri, 24 Sep 2004

Of course Sun doesn't really get it. - 11:40
Yesterday I commented on a Sun developer noticing the sour-grapes attitude of the LTT developers. Interestingly, today on LWN there was a link to a diary entry from a Sun engineer going on about why he thinks Sun cannot use Linux or work with the Linux kernel development people. Greg K-H (Linux kernel developer) has a rather good rebuttal. He points out that working with the Linux kernel developers on a feature until everyone is sure it can go into the kernel (it will be maintained, is of high quality, and will in fact be used and useful) is how you get things into the kernel. You do not simply put code in because some marketing or management person says it is absolutely necessary.

[/comp/linux] link

Thu, 23 Sep 2004

Sour grapes in kernel coding - 19:36
Sitting in CLUG at the moment and not really paying attention to the talk, I should probably do some blogging (well, I could do some work, but hey, this is different). On his blog today Andrew Over wrote something about the DTrace features in Solaris 10. Personally I don't much care about DTrace, however I read his links anyway. It is entertaining to see some of the comments from one of the Sun engineers about the Linux Trace Toolkit developers.

Basically, the LTT developers are whining that LTT's not being accepted into the mainline Linux kernel has caused it to lag and allowed DTrace to become the more advanced technology. I have to agree with the Sun guys here: it looks like sour grapes. With the Linux kernel you simply need to work with the kernel maintainers the way they wish to work. First provide code, tests or performance data to back up your ideas and show that the feature belongs in the kernel. Then work publicly with the kernel maintainers to integrate your code and ideas in small patches. Do not develop elsewhere for a long stretch, submit a huge monolithic patch, and then whine when it is rejected.

[/comp/linux] link

