PHP-OOdb: Database Interface Abstraction

About a year and a half ago I began work on a multi-user, multi-calendar event management system for my job. It was a side project, something that “might be useful in the future,” because the main event management system we were using was old, buggy, and closed to us, so we had no way to extend it. Version one came out and worked fine, but since it was a fairly complex system, especially for one coder to build, I wasn’t happy with how a lot of the process steps flowed. I also wanted to add more features and move it to the Smarty Template Engine, so I started a complete rewrite from scratch.

While working with one of our UI designers on a redesign of the database, I found that the ugliest parts of my code appeared whenever the database changed. I had abstracted all of the queries into a single Database object containing every method I needed to fill in forms, fetch lists, and so forth. That was nice as far as it went, but many of the operations were redundant, and whenever I had to rewrite a query for a single operation, or the schema changed even slightly, updating everything was painful.

At this point our sysadmin was finally setting up a PHP5 server, so I could use proper objects and move toward a more object-oriented approach to PHP coding. Projects outside of my job had already given me a good introduction to PHP5 objects, and with a professional coder with years of experience as a tutor, I felt comfortable that I could do it. This is how PHP-OOdb was born.

My basic requirements for the system were:

  1. It has to play nice with Smarty. Even if I include the files globally to set up the database connections, it must not open a connection until I actually USE the database. That way, if I decide to pull from a cache instead, I don’t waste the overhead of opening and closing a connection (see the sketch after this list).
  2. It has to be object oriented, and in a good way: I wanted to use design patterns to make it easier to understand.
  3. It needs to be completely generic. I want to be able to use this on all of my other projects to make my code cleaner and my productivity higher. If I can actually use it, then maybe other people will find it useful too.
  4. It has to be all my own code.
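As an illustration of requirement 1, here is a minimal sketch of the lazy-connection idea. This is not PHP-OOdb’s actual code; the class, property, and method names are made up for the example.

  <?php
  // Illustrative only: a wrapper that remembers the credentials but does not
  // open the MySQL link until the first query is actually issued.
  class LazyConnection
  {
      private $link = null;   // mysqli handle, created on demand
      private $host, $user, $pass, $db;

      public function __construct($host, $user, $pass, $db)
      {
          // Just store the settings; no connection is made here.
          $this->host = $host;
          $this->user = $user;
          $this->pass = $pass;
          $this->db   = $db;
      }

      public function query($sql)
      {
          if ($this->link === null) {
              // First real use of the database: open the connection now.
              $this->link = new mysqli($this->host, $this->user, $this->pass, $this->db);
          }
          return $this->link->query($sql);
      }
  }

A page that is served entirely from a cache never calls query(), so it never pays for a connection.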

I looked into the ActiveRecord design pattern and some object-relational mapping implementations, and the most aggravating thing in all of them was handling inter-object relationships. I wrote PHP-OOdb in the way that seems most intuitive to me.

I have also written up some documentation to go with it. The home page for the system is http://fugitivethought.com/projects/php-oodb. Documentation and examples are posted there as well.

I hope somebody finds it useful. Any bugs or comments or insults, please post here for now. If there is honest interest in it, I can set up forums / wiki / bugzilla and properly maintain the project. If you know of an alternative that is good and you have recommendations, I am very much open to using them instead.

Javascript Tutorial

Steve sent me this amazing link, one of the most thorough and fun introductions to JavaScript I have ever seen. I don’t know how approachable it is for people who have never used JavaScript before, but as someone who has written a decent amount of it, I found it a very nice reinforcement of how JavaScript works at the basic level.

http://developer.mozilla.org/en/docs/A_re-introduction_to_JavaScript

AquaDock Review

As promised in my Mac for a Week post, I have tried out one of the Windows programs that emulates the Mac OS X Dock, named AquaDock. Looks great, doesn’t it? I thought so too – at first.

(Screenshot: AquaDock)

The short and (not so) sweet version of the review is that I am uninstalling AquaDock right now. First of all, the things that are done right:

  • Look – AquaDock looks and feels exactly like the OS X Dock, from the bouncing icons when you are opening a program to the different zooming options. This part I like.
  • Basic Principles – they have the right idea: the dock acts as a combination launcher / taskbar. It puts the little black triangle / arrow beneath applications that are already open, and clicking one of these re-opens the existing window (usually).

However, there are a couple of issues that break any usefulness it would have for me.

  • Doesn’t Always Restore Window! – this happened to me with Firefox. If you have a Firefox window already open, it puts the proper triangle beneath the icon. However, instead of actually restoring the window, it opens a new Firefox window when you click it! I didn’t run across another application that it did this for, but Firefox is my primary application, so that’s enough to annoy me.
  • Not a true taskbar – if you open programs that don’t have icons on the AquaDock, it doesn’t create a temporary icon and list them there like the real Dock does.
  • Not Primary Taskbar – I don’t really blame the AquaDock programmers for this one, since it is just the way Windows itself is built and I don’t know of any decent way around it, but I can’t actually close any windows, because closing a window exits the application. The closest I can get to the Mac effect is to minimize the window, and a minimized window is still listed in the Windows taskbar. Even though I can hide the Windows taskbar, this aggravates me too.
  • Deleting Outlook Express – the dock comes with Outlook Express listed by default as your mail client (despite the fact that my Windows default mail client is Thunderbird), but if I right-click the Outlook Express icon to try to delete it, it gives me error messages and refuses to remove it!

Overall Rating: 3 (it works, it has the right idea, but it misses on some important points)

Mac for a Week

This past weekend was a long weekend thanks to Memorial Day (Observed). When I came into work on Tuesday, I won’t deny that I would have liked to take another day or two off. But while I was a good worker and showed up to be productive, my workstation PC wasn’t so enthusiastic; it refused to boot up. Yes, it is plugged in, and yes, I did try switching it off and then on again, and no, I don’t need to put any more IT Crowd references into this story. Anyway, I called tech support, but there was no answer, so I sent them an email that was supposed to be “replied to promptly”. Feeling like being productive just to spite my computer, I went looking for another workstation.

Horror of horrors, the nearest computer to me was an iMac (1GB of RAM, OS X 10.4.9, 1.25GHz PPC). Well, I have little experience with actually using Mac computers for anything more than “oh, you just have to reboot it to fix it,” so I decided that this would be a good time to educate myself. I decided to spend the whole week using the Mac, regardless of the status of the other computer. And while I’m at it, why not keep a decent log of all of the things that please me and irk me about the Mac system? Sounds like a useful story. Keep in mind that this is a four-day week, so we’ll see how well I become accustomed to the Mac Way after four days and whether I want to go back to Windows.

Day 1

So the first thing I had to do was get one of the other administrators to make me an account. This is actually kind of nice, since most Windows XP machines aren’t very well secured and anybody can go on and make themselves a new administrator account. Anyway, once that was all set up I started configuring my tools the way I like them. A couple of things annoyed me right off the bat.

First, I don’t like the keyboards. While this is definitely a hardware issue and not something that is wrong with OS X, the Mac way seems to be to have everything packaged together, so I’m going to complain about the keyboard as I would the rest of the system. The keys are too stiff and I don’t get enough feedback from them. I am accustomed to more of a clickity-clack when I type, and I can feel my wrists actually getting a workout on the Mac keyboard. The only nice thing about this is that when I am typing very fast, I seem to hit fewer wrong keys, because a key doesn’t register when my fingers merely brush it. This may be something I will grow to like; we’ll see.

The next thing that bugs me is the fact that the Home and End buttons do not work as expected. After a day of using the Mac I’m still not sure how they’re supposed to work. Thankfully, they are fairly predictable inside of Dreamweaver. This might be a Macromedia thing, or maybe Apple has special rules for special text areas; I’m not really sure yet. When I hit “Home” I expect the cursor to go to the beginning of the current line. In Dreamweaver it does. Anywhere else… nope. In a textarea form field in Firefox, Home takes me to the very beginning of the textarea. Since most of my work is done via typing, this is most likely going to aggravate me immensely.

Dreamweaver is working as expected; the only real difference between the Mac version and the Windows version is that the Mac version doesn’t encapsulate everything inside one large window, but rather spreads the different toolboxes out into their own components, similar to how The GIMP does (sorry if that is a bad analogy; GIMP is the only other application I have used that does this). I also had to change a couple of the default key bindings inside of Dreamweaver so that I could use Control-Tab to move between open files. Not a big hassle.

One thing that I am actually enjoying a lot is the dock. For those stuck on Windows, it works something like the System Tray. The icons are links to start the application, but so long as you only close windows and don’t really quit the application, they are also used to reopen the window. For example, I am using iTunes to listen to somebody else’s music collection, and when I close the iTunes window, the music keeps playing. If I click the iTunes icon in the dock, it opens iTunes back up and shows the music playing. I like this a lot; it is sort of like a more intelligent taskbar. It is also useful for Thunderbird, since the icon changes to reflect the number of new messages I have waiting.

I am also happy to have a working shell. As a sometimes-Linux-dabbler I like having the power of bash at my fingertips. Having vim is nice too; I actually used it quite a bit today to debug an image generation script I was working on. One thing that was mildly annoying was finding out that “Console” didn’t do what I expected and having to search around until I found that “Terminal” was actually what I was looking for.

The last thing that annoyed me was the fact that I couldn’t use any of the delete keys to delete a document on the desktop. Why not?!? I even tried different combinations with the command (apple) key and control key. I ended up either having to click and drag the document to the trash or just use rm from the terminal.

Overall, the first day went fairly well. It was definitely easier than learning any of the Linux interfaces (sorry KDE, I still love you!), but there are some things that still aggravate me and I am not sure if I can customize them away. It is a good thing I didn’t wait for tech support to get back to me: their first reply came at 5:22 PM, after I had already left work, and that was just to tell me that they needed all of the support numbers off the side of the computer as well as its make / model.

Day 2

I am getting more comfortable with some of the key bindings. Just about everything is done with the Apple key (or Command key; I don’t know what its official name is…). The standard copy/cut/paste is Command-C/Command-X/Command-V respectively. Undo is Command-Z, redo is Command-Y. I think I actually like the default binding to close the current tab (Command-W) better than the Windows default of Control-F4; I don’t have to stretch my fingers as much and it is a key combination I use often. The only thing I’m still finding aggravating is the Home-End dilemma (see Day 1). I really like having the volume buttons built into the keyboard; it is much nicer than the extra-small buttons they put on some of the “multimedia” keyboards for Windows.

The directory structure (colons are used where I am used to slashes) is fairly straightforward. Having all of my applications in one folder is nice for when I need to find a program. I’m not sure how nice this is from a logistics standpoint, or if it is just faked to look that way from the Finder, but from a usability perspective it is very nice and intuitive. It is sort of like the “Program Files” folder in Windows, but better organized, and Apple doesn’t try to hide it from me by default.

I have been using Firefox as my web browser just because it is what I am already used to, but I have fired up Safari a couple of times just to play with it. As a web developer I have to make sure that my sites don’t go completely awry in this browser; generally that isn’t any real extra work beyond being standards compliant (aka, passing the W3C Validator), which is a good idea anyway. I used to use Firefox because it had a cleaner feel than Internet Explorer, but Safari has an even cleaner feel than Firefox right now. It also seems to be a bit snappier than Firefox, which may be due to some optimizations because it IS the default browser for the operating system. I am on a PPC, so I can’t run Parallels and thus am stuck with IE 5.5 for testing my work in Internet Explorer, which is slow as sin and twice as bad. But I seldom use IE on my Windows machine anyway, so I don’t feel very crippled by this.

One thing that is kind of interesting to me: Firefox seems a bit slower here than it is in Windows. This is something that I have noted to be true in Linux as well, especially when there is any JavaScript involved. It might be a version issue, however: on Windows I typically use Firefox 2.0.x, and I haven’t gotten around to upgrading this Mac’s copy from 1.5 yet, so I may be comparing against a biased memory. According to a quick Google, Firefox is generally faster than IE at JavaScript. I must keep this in mind when working on any JavaScript tricks for my web applications. We have kept FugitiveThought mostly JavaScript-less, but some of my other projects have used it fairly intensively.

I have been using TextEdit to actually write these blog entries, and I must say, it beats Notepad to death on the first bell; I enjoy using it far more than Notepad or WordPad. I don’t seem to be able to bold text with key bindings, but I can do it from the menu at the top, and that actually keeps me from going crazy bolding everything in sight. Overall, it feels clean and snappy.

Current opinion of Macs: about twice what it was before starting this little stint =).

Day 3

I realized that I was getting comfortable with the Mac when I found myself trying to use some of the Mac keyboard shortcuts in Windows on my own computer last night. Steve was kind enough to point out that Ctrl-W is the standard tab-close key combination in Windows, so I guess all this time I have been reaching up the keyboard to Ctrl-F4 for nothing! Oh well, I have long fingers, so it really doesn’t make a big difference to me. The Home-End key problems are still the most aggravating thing to me. Thanks to an anonymous poster (hmm, according to IP addresses it’s somebody else in my web development lab!) I found the key combination to delete files on the desktop. Apparently I did not try the universal do-everything Apple key!

Since I am using a PowerPC architecture, I cannot run Parallels, and I can’t get Internet Explorer 6 or 7 to run at all! I tried downloading a Wine port for Mac named Darwine, but its website says it is unable to run Windows binary .exe files on PPC architecture. So what is the point of installing it on a PPC?? If someone can enlighten me on that one, I would be very appreciative. I can see the value of Darwine for Intel Macs, though, so I am by no means trying to diss the project.

I have gotten very attached to the dock; I need to find a Windows equivalent that I can use. I have used the similar launcher in XFCE in Linux, but it doesn’t work the same way. Aqua Dock from Softpedia looks incredibly interesting. I’ll try it out and post a review later on.

Day 4

The technician came today from Dell and replaced the motherboard in my PC, so Monday I will have a PC to go back to. I can’t say that I will miss the Mac; there are some things that aggravated me, but overall I have had a decent experience. The one-button mouse got annoying enough today that I swapped it out for a regular two-button Dell USB mouse for the rest of the day, and I’m happy to say that it works. Gone are the days when a Mac required its own particular brand of mouse (yes, I know that was a long time ago, but it has been a very long time since I tried out a Mac for real, so cut me some slack here…). With the right click working, the Home-End issue was really the only thing that aggravated me.

Keep in mind that this experience was in a work environment, so I did not play around with a lot of the other features that I would use if it were my personal machine. The AIM clients, etc., are there but unused. From a general usability perspective, the Mac was faster to learn and more intuitive than Windows. A couple of times I found that I was only having issues because I was overthinking the task. Once I got used to the default locations for everything (Preferences are under the first drop-down menu, the one with the name of the application, and quirks like that), I found that everything I used had the same layout, which made learning new applications easier.

One argument that I have heard from Windows and Linux users before is that the Mac isn’t customizable enough for them, and while I did not do anything too complicated in the way of customization, it was fairly straightforward and easy to configure all of the settings that made me comfortable in the operating system. It adapted to my usage patterns faster than any system I have used. With the integrated terminal, I don’t think many people would find their usual customizations hard to carry over either. One other thing that was very impressive was that drag and drop worked for everything I tried it on, from rearranging menus to opening music to transferring files from one application to another. This is matched only by the drag-and-drop abilities of Ubuntu Edgy (I have not yet tried Fawn).

Final Recommendations: FIX HOME AND END KEYS!!!!!!!!!!

TinyMCE

Yay For Open Source!

Hello again everyone. We are currently changing some stuff around in the administrative side of our CMS (a home-brewed blog/comments/content management system). One thing that is making things easier for us is our move to a JavaScript-based WYSIWYG editor for the blog entries themselves. We are using TinyMCE (http://tinymce.moxiecode.com/), which is actually very nice. I had originally discovered it while experimenting with Joomla (http://www.joomla.org/), which uses it for a lot of its forms.

I’m happy to say that it works quite well and we’ll definitely be sticking with it (pending Steve’s approval as well, of course). There are some things that could make it easier to use (better key bindings, for example), so perhaps if we get ambitious we will fix these and submit the patches; we’ll see how time treats us.

Anyway, good luck and happy coding

  Justin D.

Apache Proxy

Apache proxies are amazing. Really. Steve and I are administering a few machines for the UConn ACM Chapter, and they are stored behind a firewall. We only have one IP address for the entire network, so we have that dedicated to a router that forwards different ports to different machines. Any access beyond that is done via SSH and SSH tunnels (another fun story someday…).

Anyway, we have port 80 opened and forwarded to our main web server that runs the bulk of the website. However, we have other machines that are used as different tools, (test beds for programming competitions, personal servers, etc), all of which we would like to be able to access through the main web-site URL. Each of these sub-machines is running its own web server, so all we really need to do is pass a certain folder in the URI from the ucacm.com site on to that machine. For example, to access the programming competition test-bed, a user from anywhere could enter http://www.ucacm.com/compserver (example only, not the real thing). We put the following rule in the apache configuration file (we’re using Apache2, so this is the file in sites-available that corresponds to ucacm.com):

 # Pass requests for /compserver on to the internal machine
 ProxyPass /compserver http://192.168.1.140/~mooshak
 ProxyPassReverse /compserver http://192.168.1.140/~mooshak

This passes all requests from ucacm.com/compserver from the outside world onto the 192.168.1.140/~mooshak address that would otherwise only be available from within our LAN.

The only real issue we came across is not a biggie for us: the logs on the LAN box being proxied to show all accesses as coming from our main server box instead of the client’s machine, since we are essentially making Apache act as a proxy server.

Smarty Caching

Probably my favorite feature of the Smarty engine is its caching ability. I have little experience with other caching systems, so I’m not going to do any sort of comparison, but I absolutely love the way Smarty handles caching.

We are using Smarty on FugitiveThought here, and it is basically our biggest line of defense against any Digg / Slashdot effect whenever a link gets posted. A few things to note about our site: 1) the front page is dynamic, but if you’ve been here often you have probably noticed that it doesn’t get updated very often; 2) the comments page (where you can read the blog post and view the comments below it) is slightly more dynamic, because we get more comments than blog posts (usually!). This would be even more true for something like Slashdot or a larger discussion site.

Caching the Front Page

Okay, so we want a static HTML cache for the front page so that whenever a viewer requests the page, the only thing PHP has to do is check whether the cache exists and then dump it to the browser. Inside our index.php file, we just need the code: $smarty->caching = 2; This turns caching on. The value of 2 indicates that the cache lifetime we set in the code applies to this page’s cache individually, rather than acting as a single global lifetime.

The code to check whether the cache exists is: if(!$smarty->is_cached('frontpage.tpl')) { /* Generate */ } Basically all this does is check whether our front page has already been rendered (aka, there is a valid, unexpired cache already saved for it). If there isn’t, we do the regular stuff like grabbing the blog entries, counting the comments, etc. Then at the very end of our index.php we have: $smarty->display('frontpage.tpl'); which displays the template.

That’s it! If the template was already cached, the call to display will just dump the existing cache. Otherwise it will render the template with all of the new values, display it, and save it to the cache. A stripped-down version of the whole flow is sketched below.
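Putting those pieces together, a minimal index.php might look something like this. It is only a sketch: the helper functions and template variable names are invented for the example, not our real code.

  <?php
  // Sketch of the front-page caching flow described above.
  require_once 'Smarty.class.php';          // assumes Smarty is on the include path

  $smarty = new Smarty();
  $smarty->caching        = 2;              // lifetime is saved per cache, not globally
  $smarty->cache_lifetime = 1800;           // 30 minutes

  if (!$smarty->is_cached('frontpage.tpl')) {
      // Only hit the database when there is no valid, unexpired cache.
      $smarty->assign('entries',        get_blog_entries());    // hypothetical helper
      $smarty->assign('comment_counts', count_all_comments());  // hypothetical helper
  }

  // Dumps the cached HTML if it exists; otherwise renders, displays, and saves it.
  $smarty->display('frontpage.tpl');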

The only real issue we came across was that the top-right navigation menu we use for logging in for account and blog management work can’t be cached. If it were, we wouldn’t have our handy links when we log in, and the site wouldn’t retain its (at least mildly) contiguous feel. Thankfully, Smarty has this all figured out for us too. In the area where we want the dynamic content, we put the code: {insert name="getLoginBanner"} into our template file. The insert tag calls a PHP function (Smarty prefixes the name with insert_, so here it is insert_getLoginBanner) and displays whatever that function returns. This function is always called, even for cached pages, so try to minimize its processing time; otherwise the entire point of caching is… well, pointless! A hypothetical handler is sketched below.
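For illustration only (the real banner code is different, and the session-based login check is an assumption), a handler for that insert tag might look like this:

  <?php
  // Runs on every request, even when the rest of the page comes straight from
  // the cache, so keep it cheap. Assumes session_start() has already run.
  function insert_getLoginBanner($params, &$smarty)
  {
      if (!empty($_SESSION['username'])) {
          return 'Logged in as ' . htmlspecialchars($_SESSION['username'])
               . ' | <a href="/admin">Manage</a>';   // illustrative markup only
      }
      return '<a href="/login">Log in</a>';
  }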

Caching The Comments

For the front page, we can get away with putting a 30-minute timeout on the cache (which makes a massive difference in processor usage if you’re getting 15,000 hits per day from a Digg-dotting), but we would like more responsiveness on the comments pages. We can base our cache-regeneration decision on the fact that the majority of web users are leeches; that is, there will always be more people reading a page than contributing to it. This is even true if you count all of the spam-bots that we have to fight off! Based on this, we decided to regenerate the cache for a comments page only when a user submits a new comment. So in the comments page, after the new comment is inserted into the database, we run: $smarty->clear_cache("comments.tpl", $index);

You’ll notice an interesting thing here. Not only does this clear the cache for “comments.tpl” (the template that describes how to display the comments page), it also passes the unique $index of this particular blog post. Since we use a single template file to describe so many different pages, we want a unique cache for each page. The comments-page cache was created the same way, with: if(!$smarty->is_cached('comments.tpl', $cache_id)) The extra parameter identifies a unique version of the template, so whenever a comment is added to one blog entry, only the cache for that entry is regenerated. A minimal sketch of the whole comments-page flow follows.
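Here is a minimal sketch of that comments-page logic. The helper functions are hypothetical, and $entry_id stands in for the post’s unique index (the $index / $cache_id above).

  <?php
  // Sketch only: save_comment, load_post, and load_comments are made-up helpers,
  // and $smarty is assumed to be set up with caching enabled as on the front page.
  $entry_id = (int) $_GET['id'];

  if (!empty($_POST['comment'])) {
      save_comment($entry_id, $_POST['comment']);
      // Throw away the cached page for this entry only; every other
      // entry keeps its cache.
      $smarty->clear_cache('comments.tpl', $entry_id);
  }

  if (!$smarty->is_cached('comments.tpl', $entry_id)) {
      $smarty->assign('post',     load_post($entry_id));
      $smarty->assign('comments', load_comments($entry_id));
  }

  // Passing the same cache id to display() picks out this entry's cache.
  $smarty->display('comments.tpl', $entry_id);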

Conclusions

Okay, that was a really fast crash course, so if you want more information, I recommend looking at the Smarty documentation. In summary, caching will very much help reduce strain on your server, and it is not complicated at all to use, especially if you are already using Smarty. I am going to be testing out the PECL PHP cache this summer on a different site, and I’ll report back on how easy that is to use!

Experiments in Edgy

Overview

So! About three weeks ago, I decided that I wanted to make a transition to Ubuntu Edgy. I have been a Linux user for a while, but I still tended to do a lot of my work in Windows, mostly because it seemed a lot easier and most of the stuff that I needed just worked. I decided to give an all-Linux approach another try (I usually try this about twice a year), and I must say, this time I think I am actually sticking with it. Ubuntu Edgy has fixed a lot of the problems that I have had with previous versions and a lot of other software seems to have significantly matured to the point of easy usability.

A quick note and kind of a disclaimer to start off, though: a lot of people have had issues upgrading from Dapper to Edgy ever since it came out (http://linux.slashdot.org/article.pl?sid=06/10/28/239258&from=rss), and I regret to say that it doesn’t seem to have gotten any easier recently. I did this as a clean install on a blank hard drive, so I cannot help much with the upgrade blues, but I’ll pester Steve to write an entry on how to solve a bunch of those issues because he went that route. This entry is an explanation of how I got a lot of things working inside Edgy that I needed, not how to get Edgy itself working. Also, since I am a KDE fan (no flaming please), I installed the Kubuntu version. I do not think there are any specific issues in other versions that I did not have, but I am not an expert, so if there are, do not sue me!

Configuration & Edgy Packages

Alright, with that behind us, let’s get on to the real thing. As I said, I installed the Kubuntu distribution. Since I have to do C/C++ development work for classes this semester, I followed the steps in one of my previous blog entries on How To Install C/C++ Reference Man Pages. Not that this is anything special about Kubuntu – it is just the first time that I have done it, so I thought it was cool.

The first snag in the system was the fact that I wanted to run dual monitors off of a single AGP card (an ATI Radeon 9200SE). I have one monitor connected to the DVI port and one connected to the VGA, and I wanted to have a spanning desktop. This is one of those things that I’ve never gotten to work in Linux before. I tried installing fglrx drivers and the ATI configuration utilities, but none of them were able to actually set up the xorg.conf properly so that it would display. Finally, after much Googling, I came across some newsgroup postings that helped, and eventually I came up with my xorg.conf that actually worked. I am using the radeon driver instead of the fglrx driver because I have seen a lot of instability in the fglrx one, and it ran significantly slower. If you have had issues with the radeon driver in previous distributions, you might want to give it a try again; it seems to be working with greater speed and stability. You can check out my final xorg.conf if you need it as a template to get yours working. It is interesting to note that if you swap between the fglrx driver and the radeon driver, it switches which port (the DVI or VGA) is recognized as screen 1.

After I got the graphics working correctly, I installed the basic tools that come with Edgy that I personally use quite a bit:

  • Amarok – incredible media player. My favorite, and I even like it more than Winamp. The fact that it works straight out of the box with my iPod is part of the appeal too.
  • The GIMP – good image manipulating program (hey, that could be its new acronym!)
  • Inkscape – nice vector graphics program. I use it for some stuff because it’s easier than GIMP
  • Eclipse, and Eclipse CDT – Java IDE for my Compilers class, C/C++ IDE for my Operating Systems class
  • XMMS – I like Amarok better for the media playing, but I am also a fan of XMMS’ alarm clock plugin
  • Blender 3D Modeler – I do not use it much, but a friend was demonstrating it for our local ACM chapter and it looks amazing
  • Firefox and Thunderbird – my web browser and mail-client of choice
  • ntfs-3g – a stable, reliable and awesome hybrid kernel/user-space driver for NTFS partitions. Works wonderfully, and is especially useful because my music and movie collection is all on NTFS.
  • OpenOffice.org 2 – a much improved Microsoft Office alternative.

Automatix

Automatix is an amazing expansion to the regular Ubuntu repositories. Make sure you direct it to use the Edgy repositories when you install it, but that is the default now, so it’s not a big problem. I used Automatix to install a few other programs that help a lot too:

  • MPlayer – install this with the Firefox plugin and it will enable you to view any of the videos posted online (CNN.com, etc)
  • Multimedia Codecs – installs a ton of codecs. After that you should be able to play just about any video file made…
  • Firefox – you could install it from here instead of the regular repositories, but I prefer using the original Ubuntu ones
  • Linux DC++ – for downloading Linux ISOs off of a friend’s hub
  • Flash Player – installs Macromedia’s flash player plugin for Firefox so that you can play flash games or view flash websites
  • Wine – Automatix does the best job of anything I have ever used at installing a functional and usable copy of Wine. After installing Wine, I installed Dreamweaver 8 (successfully and easily!) so that I could do my web development work. It works great.

Remote Desktop Replacement

One of my favorite features of Windows XP Professional was the Remote Desktop program. For anybody not in the loop, it essentially lets you connect to your computer remotely from anywhere that has the client (or anywhere you can run the client from a USB stick!). From there you can full-screen it, and it is essentially like you are sitting in front of the computer in your room. Another favorite feature is that it can forward audio, so I can listen to my music collection from anywhere without actually having to carry the collection around. The best Linux equivalent that I have come across yet is FreeNX by NoMachine. It essentially wraps itself into an SSH session and forwards a fully functional desktop.

There are a few important things to note about FreeNX. First of all, it behaves like SSH in that when you connect, you get a completely fresh session (in this case a GUI one). So when you connect remotely, don’t expect to see all of the programs that were already open when you walked away from the computer in your room. It does, however, support suspended sessions: when you go to disconnect from your current session, it gives you the option to either terminate or suspend, and if there are suspended sessions, the next time you remote in it offers you the option to resume one or start fresh.

Also, the NoMachine version of NX Server only allows 2 users to log in concurrently. If this is all that you need, then I recommend theirs. Otherwise, you can get alternative .debs from these instructions, which are also a good resource if you have trouble configuring the system and getting it off the ground. The debs from that site are for a GPL-ed version of FreeNX which requires nxclient 1.5 (available from those repositories, and for Windows users from many other places).

One of the most aggravating things about using FreeNX was that whenever I connected remotely and tried to open a new session of Firefox, it told me that there was already an instance running (aka, the instance I had left open in my room). This can actually be resolved using one of Steve’s shell scripts (http://fugitivethought.com/projects/shell-scripts/firefoxcheck), but there were other issues too. I use Amarok as my main media player, with the Xine engine for audio output. To play music in my room I had to tell Xine to output to the ALSA driver; to play over FreeNX I had to tell it to output to the ESD driver. Amarok makes it reasonably simple to switch this (Settings -> Configure Amarok -> Engine -> change the Output Plugin from a drop-down menu), but it was one more thing to do that I should be able to just ignore. Finally, Alt-Tab inside of FreeNX switches between windows in your parent operating system (whatever OS you are running the client on) instead of on the remote desktop, so I had to rebind it to Ctrl-Alt-Tab to switch windows inside the FreeNX session. Ultimately, I created a new user account on the server computer that had all of these settings saved as defaults, and I now connect using that user name instead. It solved all of my issues and I am now a very happy FreeNX user.

Internet Explorer

This one will probably make a lot of you cringe. I know I cringed when I first heard about it. But there are still a number of websites out there that either use a bunch of IE-only hacks or directly tell you that they only support Internet Explorer and simply refuse to render the page until they are convinced that you are indeed running the Microsoft browser. Thankfully, an enterprising guy over at Tatanka (why is his site named after bison again??) came up with a nice, easy program that solves this for Linux users. With Wine installed, run his Ies4Linux program and you get Internet Explorer 5.0, 5.5 and 6.0 installed and ready to roll, complete with the Flash plugin. They run inside their own Wine bottles, so they won’t affect any of your other programs that use Wine, and hopefully they will be relatively safe (hey, it’s Internet Explorer, so no guarantees!) for the rest of your operating system.

Conclusions

I have been using Ubuntu Edgy for a little over three weeks straight now and I am still quite happy with it. I think this is the longest time I have gone yet without thinking “gee, I really need Windows to do X” or even “man, this would be a lot easier in Windows!” If other issues do arise that cause me to change my mind about it, there will undoubtedly be a rant posted here, but I am optimistic. I realize that to a lot of you, the stuff on this page is redundant, but I hope you found at least one thing cool or interesting. Please leave feedback and let us know what you think!

I originally put most of this information into a slightly sarcastic presentation for my local Linux Users Group named 10 Reasons NOT to Use Linux; it can be downloaded in OpenOffice Impress format or Microsoft PowerPoint format. Enjoy!

C man, C man man, man man!

Just a quick note for anybody who is curious about what exactly they have to install in order to get all of their lovely C / C++ functions to show up in man (anybody in my class who is doing Linux-based development for the first time might care): the proper package is manpages-dev. This lets you look up things like strstr, malloc, realloc, sprintf, etc. in man from your command prompt (for example, man strstr) so you know exactly why that darned thing isn’t compiling!

If you have Ubuntu (and probably Debian too), that would be:
apt-get install manpages-dev

Happy coding!

*Extra note: if you need the man pages for threads and some other more advanced stuff, install glibc-doc. If you are using Debian, there may be an issue with this, because they removed those pages from apt until someone updates the man pages for the pthread_ calls, which are roughly 8 years out of date. (This may no longer be true, but I saw some people having that problem.)