People have asked me several times why I run 64-bit Ubuntu. Running a 64-bit OS these days still has some problems: you'll want to stay away from the Windows flavours, and on Linux it's mostly a smooth ride, but the catch is in the "mostly" part. Yet 64-bit Ubuntu works like a dream for me. Everything works with the exception of some quirks, and most of the time they don't bother me that much.
As for why I choose to live with those quirks, there are two reasons really.
- Probably the most important: I hate the idea of not fully using my hardware. My fancy li'l processors (all several of them) have room for 64-bit computing. The idea of leaving 32 of those bits unused and only using 32 of them because the software is a bit behind on the technology simply irks me. If I wanted to use merely 32 bits, I'd have bought a 32-bit processor. I have a 64-bit processor. I need to use all 64 of those bits to the maximum of their potential (as far as the current software will allow me to).
Imagine a huge freeway with 32 lanes. After some years, designers figure out that traffic would flow even smoother with double the number of lanes, so gradually all freeways get 64 lanes. That means double the amount of traffic at the same time. And then (by installing a 32-bit system) you decide: those 32 new lanes, we don't need them; we'll just keep using the old 32 lanes and ignore the 32 fresh ones. Doesn't that feel like a waste?
(And for the haters: I do know that I am simplifying the matter quite a bit, but since I'm trying to convey a point here, not write a manual on the difference between 32-bit and 64-bit processors, you'll have to live with that. Those interested in the differences can read up on it on Wikipedia!)
- Speed! Those 32 extra bits should (logically) be able to speed things up, even if only marginally (and even if only for software that HAS been decently adapted to 64-bit where necessary). I hadn't really gathered any evidence for this probable speed-up and wasn't really looking to do so, being content with the fact that it should logically make a speed difference, and happy in my ignorance about whether any of the software I run actually feels that difference. (Yes, ignorance sucks, but I've got better things to do than checking every bit of software on my desktop computer for performance gains in 32-bit vs 64-bit. A marginal increase in my desktop's performance isn't mission critical to my work as a Linux dude.)
Yet now, the fine people at TuxRadar have done some 32-bit vs 64-bit benchmarking on Ubuntu 9.04.
And lo and behold, there is now a proven speed-up for us 64-bit masochists, thus validating my second point. Hooray! To quote the article: "As we said earlier, it's a nice bonus. Sure, 5-10% isn't a lot, but when it's across your whole desktop and comes at no cost, why not?"
I’m 64 Bit!
And I'll be installing the new Adobe 64-bit Flash 10 alpha now, because Ubuntu doesn't want to do it automatically for me. But if you check out the speed-up, it'll be worth it 😉
I am very aware of the differences between 32 bit and 64 bit CPUs and also of the other differences between the 32 bit and 64 bit incarnations of the Intel Architecture.
While there is performance to be gained under some workloads using 64-bit pointers, most applications benefit more from other aspects of x86_64 (like the extra registers and vector instructions) than from the expanded address space. In fact, using 64-bit pointers can be detrimental in many circumstances.
If an application uses less than 4G of memory, 32 bits of every address are wasted. There is nothing you can do with those bits. Because you're using them, though, they still need to be cached. The result is that you waste half your cache on pointers for any application that uses fewer than 4G of memory.
It would be very useful if gcc could learn something from Sun's compiler. Sun compiles kernels for 64 bits and userspace for 32 bits, of course taking advantage of all available processor features. If you have a userspace application that needs more than 4G of memory, you can compile that particular application for 64 bits to take advantage of the larger address space.
I agree on the Linux stuff, but there is no reason to steer clear of the 64-bit Vista version… it's probably the only version of Windows that runs really smoothly and doesn't lock up… at all!
Been running it for quite a long time now.
(And no, I'm not a Linux hater; I run Windows, Linux and Mac machines, I love them all)
I've heard that several times since the post, so apparently I was behind on how Windows is doing, for which I'll apologize to MS 🙂 Thanks for the update!
Many people are confused by what 64bit really does.
By itself, 64-bit processing does not usually speed things up. The street analogy you give is flawed; it's not that things are faster, it's that the address space is larger. On a 64-bit processor, memory addresses are expressed in 64 bits rather than 32 bits, and the same is true for processor opcodes (i.e., machine-language instructions).
So in truth, on most 64-bit processors, the most important difference compared to their 32-bit counterparts is that they need twice as much data pumped over from (slow) memory for every address they fetch, and much more traffic to the memory controller and cache for memory addresses. Also, the native address and 'long int' types will usually be 64 bits, so 64-bit integer math will be a bit faster.
Since, however, few applications need to store more than 4G of data or do loads of long-int math, in practice, on most architectures, going to 64-bit mode usually slows things down. As Philip already indicated, the only reason this is not the case on Intel/AMD 64-bit processors is that the x86_64 architecture has twice as many registers, allowing the processor to keep much more state internally, which significantly reduces the number of times it needs to access memory.
Wouter: note also that "twice as many registers" still means only 16 (finally, Intel is on par with ARM! It only needs to double once more to reach a decent number!), a couple of which are not very useful (like the stack pointer, which has gotten even more ridiculous than it already was). It's still an Intel Architecture CPU, after all… a collection of hacks and all that.
What makes applications compiled for x86_64 "faster" on certain workloads is the presence of new special-purpose instructions. If, on the workload you're testing, those instructions provide a performance increase sufficient to compensate for the slowdown of losing half your cache, you "win".
In many workloads, winning is rare.
Apology accepted 🙂
Also wanted to note that I'm running Vista 64 on a dual quad-core Xeon machine (that's 8 cores, yes) with 12GB RAM, but it runs just as smoothly on my MacBook :p
Finally found time for a decent reply 🙂
First off, thanks to Philip & Wouter for the amazing & interesting comments! (as usual 😉 )
I knew the freeway image was flawed (and said so) but still chose to use it. To explain it a bit more: specific 64-bit data types do use the extra width, and using 32-bit data types where 64-bit ones could be used (encryption, audio conversion, ..) seems like a waste to me. The street analogy seemed like a good way to put that, even though it doesn't apply to most programs. I like encrypting stuff 😉
And although logic says that the longer addresses, variable padding and whatnot will in some cases hurt performance, the benchmarks I linked to in the post point to the contrary. (Hooray for benchmarks!)
And for those who stumbled into this and want to read up on the whole thing, do check the relevant wiki page! 🙂