Safari for Windows Debug Menu
by Jon on Monday, June 11, 2007 file under: Technology

In an interesting turn of events, Apple released Safari for Windows. Well, they didn't quite release it; it's still a beta, but in general it works very well (this post is being made from one of Safari 3's resizable text fields).

If you're a web developer, Safari's Debug menu is a necessity. Fortunately for those of us stuck on Windows, Safari's Debug menu is still available; however, it appears you have to get your hands dirty to enable it.

On a Mac, you'd open a Terminal and type:

defaults write com.apple.Safari IncludeDebugMenu 1

As far as I can tell, Windows doesn't have Apple's defaults utility, but plists, the plain text XML files where the preferences (including the Debug menu setting) are stored, can be edited by hand. To enable the Debug menu in Safari for Windows, add the following key/value pair to c:\Documents and Settings\your username\Application Data\Apple Computer\Safari\Preferences.plist:

<key>IncludeDebugMenu</key>
<true/>

before the closing </dict> element, and restart Safari.

It seems a little more spartan than Safari for Mac's debug menu, but at least it includes a JavaScript console and User Agent switching.

Permanent link to Safari for Windows Debug Menu

Unix is Easy
by Jon on Tuesday, May 1, 2007 file under: Technology

You just need to know what to tell it.

Permanent link to Unix is Easy

Nokia N800
by Jon on Wednesday, April 25, 2007 file under: Technology

A few weeks ago I purchased a Nokia N800 from a floundering CompUSA in the Phoenix Metro area (from my vantage point, it appears that all CompUSAs in Phoenix Metro are closing). I always liked the idea of the Nokia 770 when it came out, and the N800 seemed to be enough of an upgrade for me to consider it very compelling... compelling enough to take the leap.

I was looking for several things:

  • a short range laptop replacement
  • a personal organizer
  • an internet appliance

Let me expand on those three things. Currently, Kortney and I have a nearly four-year-old TiBook that's still kickin' it hardcore. It's a great laptop, with a nice rugged metal frame and a beautiful display, and it still has enough power for what we do day to day. My only complaint is that when I'm carrying two big textbooks for class, it adds quite a bit of weight to my bag. Plus, when I take it, that leaves Kortney without a Mac (but with a dual-screen Linux box!), and without the computer configured for her most efficient use.

I wanted a device which would allow me to leave the laptop at home when I went down to school, but wouldn't require me to a) carry a full size notebook computer with me, and b) give up any of the functionality of a "real" computer.

I also wanted a small device because I thought it would help me keep track of all the stuff I need to do for school and at home. Kortney and I both keep everything in iCal and subscribe to each other's calendars. I have a laundry list of to-dos, and all my contacts arranged (many with pictures) in Address Book. I wanted something that would let me take those repositories with me, allow me to edit them remotely, and sync back up when I got home. My phone can already do that (sans to-dos), but adding a calendar entry with a numeric keypad is an exercise in frustration.

Finally, I wanted something that would allow me to get on the internet. A device that would allow me to ssh into my home network, get and send email, browse through my RSS subscriptions, waste time on the internet. I liked the idea of being able to connect over WiFi or Bluetooth on my laptop, so a portable device should do that as well.

When the Palm TX was released, I thought it was the device that was going to do all those things for me, but it had one problem: it came with an aging version of Palm OS that I probably wouldn't be able to update once Palm decided they couldn't milk that cow any longer. When the 770 was released, it replaced the TX as the device of interest, and when the N800 came out this past winter, I was ready to pull the trigger (well, that, and the fact that CompUSA had slashed the price).

So, does it live up to what I was looking for?

With the addition of a Bluetooth keyboard, I think the N800 would work great as a short range laptop replacement. It already has a fairly large application library thanks to the 770. The screen is bright and crisp; for a Unix geek, all of the standard utilities are there, and for a non-Unix geek you can get most everything you need, including word processors (AbiWord) and spreadsheet programs (Gnumeric). I'm a bigger KDE fan than Gnome fan, so it's unfortunate that the majority of applications are Gnome ports, but several people have full KDE stacks running on their N800s, so that may be a possibility down the line.

As a PDA, the N800 has some room to grow. If you are just using the N800 to keep track of appointments, contacts, and tasks, or if you are using Linux as your primary desktop, the GPE PIM suite will probably work for you. However, if you are looking for something that will sync with iSync and all of the Apple apps like I am, the N800 is like a little island. I have had success pushing calendars to the device, but in a read only capacity, and have had no success getting modifications back from the device.

As with most open source software, there is also a divide over how things should be done. The N800 comes with an address book repository which the other pre-installed apps use, but third party apps all seem to use their own contact databases, providing no way to sync between them. I know the GPE PIM suite was developed tangentially to the Nokia Internet Tablets, but when its apps were ported, they should have used the Nokia address book. Their way may have been infinitely better, but as a user, I don't care; I just want everything to use the same data. If anyone does work on creating an iSync plugin for the N800, I'm guessing they'll also have to choose a collection of applications to support, which means it may not support the programs I'm using. As a PDA, the N800 may be useful to some, but it has a way to go before it will be useful for me.

Fortunately, as an internet appliance it excels. I can get on the internet practically anywhere I get phone service (using my phone as a Bluetooth modem). My home screen has the weather forecast for the next 5 days and the latest RSS feed entries. I can stream internet radio to the device, including BBC. YouTube videos are shaky, but watchable. And the Opera browser is fantastic. I am surprised Nokia went with Opera when they have been using WebKit on their Symbian phones, but really, Opera does a great job.

The N800 still has a long way to go, but it is a relatively young platform. The package manager is great at keeping apps up to date, but it seems that essentially every app has its own repository. Coming from a Gentoo and FreeBSD/MacPorts background, it's really nice being able to install any application from a single repository, and I miss that here.

The device also seems to have an identity crisis when it comes to being operated by stylus or fingers. Many of the buttons are much too small to hit accurately with even my tiny fingers, but other facilities literally enlarge to facilitate thumb navigation. I would like to see finger navigation expanded in future releases of the platform software.

Overall, the N800 is a fun little device with a lot of potential. I fear, however, that come June its merits will be overshadowed by the iPhone. I've read that usability is high on Nokia's list for its next software release, but they are up against the kings of usability. On the open source front, developers need to start thinking like Apple: open source is a great foundation, usability is the ultimate goal.

Permanent link to Nokia N800

On Web Scaling
by Jon on Tuesday, April 24, 2007 file under: Technology

At the beginning of the year I was doing a lot of web scaling work, which wasn't dissimilar to the web scaling work I had been doing about a year prior to that. There's a general problem that every semi-successful web developer is going to have to face at some point during their career: only a small fraction of web users want your content, but there is a gigantic number of web users.

The app I'm working on at the moment gets around 50,000 requests per server during peak traffic hours (including static and dynamic content). It's currently spread across multiple servers with a hardware load balancer in front of them. When I came back to this project (I had moved on to other things, and this project had certain upcoming opportunities that interested me), all of the servers in the cluster were pegged, with a load equal to the number of CPUs in the system. (For the non-Unix guys and gals: your load average shouldn't get above the number of processors in your system. For most systems that would be 1.0, but servers generally have more processors than that.)

I combed through the app looking for bottlenecks, and was able to significantly reduce the load on the servers just by changing specific ways the app was written; no hardware was touched. With a few changes, one server in this cluster could now handle all of the traffic (of course, for redundancy reasons, there are still multiple servers). So how can we take an app from snail to cheetah? Let's find out.

A few more details: this project is written in PHP, which can scale very well if you don't bog it down. Many of the techniques, however, are applicable to any app.

  1. Cache Everything You Can

    Are you regularly fetching slow-to-change data from a web service, RSS feed, or some other source? Cache the result in a fast cache.

    Don't know what facilities your scripting language has for caching? I would wager that BDB, GDBM, QDBM, or some similar file-based database is available in your language of choice. These databases are simple key/value stores that reside on disk. They allow you to insert data, update data, query whether data is available, retrieve the value of some data, or delete data from the database. There's no SQL and there's only a single key namespace per database, but these databases are fast. Let me rephrase that: extremely fast.

    Hitting a file-based database is going to be significantly faster than hitting a remote resource, which means your web server spends more of its time serving content and less time in a wait state. That also means connections complete faster, which can reduce the load on your server.

    Remote resources are obvious, but what about SQL queries? Cache those too! If your database abstraction layer doesn't support caching, or you're not using a database abstraction layer, find one that supports query caching. Caching queries reduces the load on your database, which frees up your system for doing more important things, like servicing that next incoming request!
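    To make this concrete, here is a minimal sketch of a file-backed cache. The function names (cache_get, cache_put, fetch_feed), the one-file-per-key layout, and the 300 second TTL are all invented for illustration; a real deployment would more likely sit on top of BDB or GDBM via PHP's dba functions.

```php
<?php
// A tiny file-backed key/value cache: one temp file per key,
// entries expire after a caller-supplied TTL in seconds.

function cache_path($key) {
    return sys_get_temp_dir() . '/cache_' . md5($key);
}

function cache_get($key, $ttl) {
    $path = cache_path($key);
    // Missing or expired entries count as a miss.
    if (!file_exists($path) || filemtime($path) < time() - $ttl) {
        return null;
    }
    return unserialize(file_get_contents($path));
}

function cache_put($key, $value) {
    file_put_contents(cache_path($key), serialize($value));
}

// Wrap a slow remote fetch so it runs at most once per TTL window.
function fetch_feed($url) {
    $entries = cache_get($url, 300);
    if ($entries === null) {
        // In the real app this is the expensive part,
        // e.g. fetching and parsing the remote feed.
        $entries = array('entry one', 'entry two');
        cache_put($url, $entries);
    }
    return $entries;
}

$first  = fetch_feed('http://example.com/feed');  // slow path, fills cache
$second = fetch_feed('http://example.com/feed');  // fast path, from disk
```

    The second call never touches the network, which is the whole point: the web server gets back to serving content instead of waiting.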

  2. Reduce the Number of SQL Queries per Page Load

    Put in some kind of monitor which shows you every query run during a page load, and see how many queries are executed. If it takes more queries than you have fingers, you probably have some room to optimize.

    I found a fairly static page in my app that was executing over 20 queries to build the page! Ouch! After looking at what the page was doing, I was able to reduce the number of queries to 5.

    What can we look for when reducing the number of queries? Are you doing SQL queries in a loop, recursively issuing queries, or querying several times from a small table? You might be able to read the entire table in once and reference the in-memory version from then on. Generally, one query that gets 10 rows is much faster than 10 queries that get a single row each, especially if your database isn't local.
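    As a sketch of collapsing a query-per-id loop into one round trip, here is a hypothetical helper that builds a single IN query; the commented getAll call assumes an ADOdb-style connection object and is not run here.

```php
<?php
// Instead of one query per id inside a loop, build a single
// "WHERE id IN (...)" query with one placeholder per id.

function build_batch_query($table, $ids) {
    $placeholders = implode(', ', array_fill(0, count($ids), '?'));
    return "SELECT * FROM $table WHERE id IN ($placeholders);";
}

$ids = array(3, 7, 42);
$sql = build_batch_query('users', $ids);

// One round trip: $connection->getAll($sql, $ids);
// versus count($ids) round trips in the naive loop.
```

    Three ids, one query, one network round trip instead of three.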

  3. Optimize your Database and your Queries

    If you can't reduce the number of queries you're making, at least make sure your queries are fast. Make sure your JOINs are on equivalent data types and on indexed fields. If you're looking for data, make sure you're looking in indexed fields. Full table scans hurt, especially once a table starts creeping up into the gigabyte or multi-gigabyte range.

    Run an explain plan on your queries. If you don't know what an explain plan is, learn how to run one on your database of choice, and how to read the output it provides. You may find that what you think is a highly optimized query is actually doing a full table scan early on, or not using indexes as you expected.

  4. Pre-render the Pages

    What's better than reducing the number of queries or optimizing them? How about eliminating them completely! How often does your front page change? Only when there's an update? Why not cache the pre-rendered page until something changes?

    Web servers are generally very efficient at serving up static content. Pre-rendering pages means they're just sending static files which are probably cached in memory by the OS. When you write to the database, you can either re-render your pages or delete the entry from your page render cache. This makes sure the page always matches the data, but reduces processing to only when it's necessary.
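    A minimal sketch of this pattern, with file names and function names invented for illustration: render once, serve the saved copy, and delete the copy when the data changes.

```php
<?php
// Render a page once, save the HTML, and serve the saved copy
// until the underlying data changes.

$cacheFile = sys_get_temp_dir() . '/front_page.html';

function render_front_page() {
    // Stand-in for the expensive template + query work.
    return "<html><body><h1>Front Page</h1></body></html>";
}

function get_front_page($cacheFile) {
    if (file_exists($cacheFile)) {
        return file_get_contents($cacheFile);   // static, cheap
    }
    $html = render_front_page();                // dynamic, expensive
    file_put_contents($cacheFile, $html);
    return $html;
}

// On any write that affects the page, invalidate the cache so the
// next request re-renders:
function invalidate_front_page($cacheFile) {
    if (file_exists($cacheFile)) {
        unlink($cacheFile);
    }
}

$page = get_front_page($cacheFile);
```

    In practice you would point the web server straight at the cached file so PHP isn't involved at all on the fast path.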

  5. Compile Your Scripts

    PHP has several opcode caches. Python can be compiled to bytecode. Use these to your advantage. Don't want to do it yourself? In PHP, APC will do it for you.

    Generally, when a script is requested, a web server will ask a script interpreter to run the script and give it the output. The interpreter will in turn read the script file, parse it, and execute it, potentially reading in additional script files which it must also parse and execute. Wouldn't it be nice to cut out all the reading and parsing and get right to the good stuff, executing? APC compiles your PHP scripts into an in-memory opcode cache so that when they are requested, they are ready to be run by PHP: no reading, no parsing, just executing. For PHP apps, an opcode cache like APC is indispensable on high traffic sites.

  6. Use Built-in Functions and Classes

    PHP comes chock-full of built-in classes and functions which are all written in C. In all but the most unusual circumstances, these built-in functions will blow the socks off anything written in a scripting language, so take advantage of that.

    To top it off, some PHP libraries have PHP extensions which replace functionality implemented in PHP with functionality written in C. ADOdb is a great example of this: you can use the pure-PHP version of the library for database abstraction, and when you install the ADOdb extension, you get an instant performance increase without changing a line of your code.
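    For example, a hand-written search loop does the same job as the built-in in_array(), but the C implementation will almost always win on speed; my_in_array below exists only for comparison.

```php
<?php
// A hand-rolled search loop written in PHP...
function my_in_array($needle, $haystack) {
    foreach ($haystack as $item) {
        if ($item === $needle) { return true; }
    }
    return false;
}

$data = range(1, 1000);

// ...does the same job as the built-in, C-implemented in_array(),
// but the built-in is the one you want on large inputs.
$a = my_in_array(999, $data);
$b = in_array(999, $data, true);
```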

  7. Only Do Things Once

    I've said this in various forms above, but the key principle is to do something once, and only once. Am I going to show an identical page to 100,000 people? Ok, let's make sure I generate that page exactly once. Do I need to show 10 different pages to 10,000 people? Ok, let's make sure I generate those 10 different pages once each, and let's find out where I can cache as much data common to all the pages as possible, so the cost of rendering additional pages is reduced.
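    Within a single request, a static variable is a cheap way to enforce "once and only once" for one function. The get_site_config function below is a made-up example that counts how often its slow path actually runs.

```php
<?php
// Compute a value the first time it's asked for, then hand back the
// cached copy on every later call within the request.

function get_site_config() {
    static $config = null;
    static $computed = 0;
    if ($config === null) {
        $computed++;  // tracks how often the slow path runs
        // Stand-in for reading a config file or querying a table.
        $config = array('title' => 'My Site', 'per_page' => 10);
    }
    return array($config, $computed);
}

list($config1, $runs1) = get_site_config();
list($config2, $runs2) = get_site_config();
```

    No matter how many places in the app ask for the config, the expensive branch runs once.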

  8. Back to the Basics

    Remember that Singleton pattern you learned about in your Design Patterns class? What, you didn't learn about the Singleton pattern, and you didn't take a Design Patterns class? Singleton classes go hand in hand with the previous point.

    I only want my app to use one database object, so guess what, I'm going to enforce that with a Singleton. I only want one class to handle query parameters, I'm going to enforce that with a Singleton. I only want to create one object to communicate with a web service; again, I'm going to do that with a Singleton.

    Use other people's code. There's tons of free software available to do just about everything (everything except the specific domain you're tackling; that's why you're tackling it!). So piece together libraries which are high performance and well tested.

    Profile your code. Find out where your app is slow, and fix it. If you can fix a slow function or method which is called several times, your speedup might grow significantly.
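    A minimal PHP 5 Singleton sketch (the Database class here is a stand-in, not a real driver): the private constructor and static accessor guarantee one instance per request.

```php
<?php
// A classic PHP 5 Singleton: one shared database handle per request.
class Database {
    private static $instance = null;
    public $connectCount = 0;

    private function __construct() {
        // A real implementation would open the connection here.
        $this->connectCount++;
    }

    public static function getInstance() {
        if (self::$instance === null) {
            self::$instance = new Database();
        }
        return self::$instance;
    }
}

$db1 = Database::getInstance();
$db2 = Database::getInstance();
```

    Every caller that asks for the database gets the same object, so the app never pays for a second connection.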

  9. Get Creative

    • Don't use a database

      Use a flat file, use a file-based DB, heck, create a constant if the value changes once in your career. Many web developers get overly database-centric. Databases are good for relating data; they're not necessarily good general purpose content stores. That's what file systems are for. You have one. Use it.

    • Don't handle authentication

      Let your web server do it for you with basic authentication. That's less authentication code your app has to worry about, and possibly more you can cache (for read only pages). Your app can generally get the authentication credentials from the web server, so why waste your time (or your interpreter's time) building something web servers have been doing for years?

    • Do it in CSS and JavaScript

      Many things can be solved on the client side. Keep your scripts lean and mean and spit your data out with minimal semantic markup. Your CSS and JS can be cached on the other side, reducing round trips and reducing the amount of data you're sending per page request.
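      Going back to the authentication bullet: when the web server handles basic auth, the credentials simply show up in PHP's $_SERVER superglobal. The snippet below simulates what Apache would normally populate; current_user is a hypothetical helper.

```php
<?php
// When the web server handles basic authentication, PHP sees the
// credentials in $_SERVER -- no login form or session code needed.
// (Simulated here; normally the web server fills these in.)
$_SERVER['PHP_AUTH_USER'] = 'jon';
$_SERVER['PHP_AUTH_PW']   = 'secret';

function current_user() {
    return isset($_SERVER['PHP_AUTH_USER'])
        ? $_SERVER['PHP_AUTH_USER']
        : null;
}

$user = current_user();
```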

Those are just a handful of pointers which have served me well over the past few years. These might all seem basic to you, but a large amount of the professionally written code I come across never takes performance or scalability into account. The general excuse is that it works for the amount of traffic it gets at the moment. One day you'll get popular, though. You'll get posted to Digg or Slashdot, and you'll be happy your code scales. Either that, or you'll be trying to put up a static page apologizing that your content has gone missing.

Permanent link to On Web Scaling

Automate iSync with launchd
by Jon on Friday, March 2, 2007 file under: Technology

Back in July, I wrote about scripting iSync so it would automatically sync my phone every night via a cron job. The solution wasn't perfect, but it worked pretty well.

Since then I've become fond of launchd. I've written LaunchDaemons to run my nightly backups and keep DarwinPorts in sync. While reading the launchd entry in Wikipedia yesterday, I came across this section on LaunchAgents:

The LaunchAgents folders contain jobs, called agent applications, that will run as a user or in the context of userland. These may be scripts or other foreground items, and they can even include a user interface.

Let me back up a little bit. While my AppleScript + cron combo works "pretty well," it is by no means perfect. The biggest flaw is that if I'm not the active user, the script fails. That means I can't automatically sync Kortney's phone every night, because one of us is going to be the active user, meaning the other person can't sync. The underlying issue is that programs launched from cron can't connect to any display but the active user's, and if the active user is different from the user cron is running as, the program requiring the display dies a horrible death.

That's where LaunchAgents come in. LaunchAgents have the ability to run a task at a given time or given event, can be run as a specific user, and can run tasks which require a user interface. Bingo!

I've modified the script slightly since the last time I posted it, so here it is again. The major change is now the script uses iSync's return status as an exit code instead of just returning the value.

-- This script will tell iSync to synchronize.  If there's
-- more than one device attached, I don't know what that
-- means.
-- hints from
-- http://growl.info/documentation/applescript-support.php
-- http://www.macosxhints.com/article.php?story=20031201172150673
-- Author: Jonathan Hohle

tell application "System Events"
    set growlIsRunning to (count of (every process whose name is "GrowlHelperApp")) > 0
    set iSyncIsRunning to (count of (every process whose name is "iSync")) > 0
end tell

if growlIsRunning then
    tell application "GrowlHelperApp"
        -- Make a list of all the notification types
        -- that this script will ever send:
        set the allNotificationsList to {"Result Notification"}
        -- Make a list of the notifications
        -- that will be enabled by default.
        -- Those not enabled by default can be enabled later
        -- in the 'Applications' tab of the growl prefpane.
        set the enabledNotificationsList to {"Result Notification"}
        register as application "iSyncScript" ¬
            all notifications allNotificationsList ¬
            default notifications enabledNotificationsList ¬
            icon of application "Script Editor"
    end tell
end if

tell application "iSync"
    synchronize
    -- wait until sync status != 1 (synchronizing)
    repeat while (syncing is true)
    end repeat
    set syncStatus to sync status
    set lastSync to last sync
end tell

set syncStatusText to ""

-- syncStatus = 2 -> successfully completed sync
if syncStatus = 2 then
    set syncStatusText to "Successfully Synced"
    set syncStatus to 0
else if syncStatus = 3 then
    set syncStatusText to "Completed with Warnings"
else if syncStatus = 4 then
    set syncStatusText to "Completed with Errors"
else if syncStatus = 5 then
    set syncStatusText to "Last Sync Cancelled"
else if syncStatus = 6 then
    set syncStatusText to "Last Sync Failed to Complete"
else if syncStatus = 7 then
    set syncStatusText to "Never Synced"
end if

if syncStatus = 0 and not iSyncIsRunning then
    tell application "iSync" to quit
end if

set displayText to "Status: " & syncStatusText & " (" & syncStatus & ").  Synced on " & lastSync

if growlIsRunning then
    tell application "GrowlHelperApp"
        notify with name "Result Notification" ¬
            title "iSyncScript" description displayText ¬
            application name "iSyncScript" ¬
            icon of application "iSync"
    end tell
end if

do shell script "exit " & syncStatus

Again, I saved this as ~/Library/Scripts/AutoSync.scpt, and can call it from the command line with `osascript Library/Scripts/AutoSync.scpt`.

Instead of scheduling this with cron, let's create a LaunchAgent for launchd to run. This is probably one of the simpler uses for launchd; it's just going to run this script once a day at 4:15am. Here's the plist:

<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
    <key>Label</key>
    <string>local.isync.sync</string>
    <key>LowPriorityIO</key>
    <true/>
    <key>Nice</key>
    <integer>1</integer>
    <key>ProgramArguments</key>
    <array>
        <string>osascript</string>
        <string>Library/Scripts/AutoSync.scpt</string>
    </array>
    <key>ServiceDescription</key>
    <string>Nightly iSync Sync</string>
    <key>StartCalendarInterval</key>
    <dict>
        <key>Hour</key>
        <integer>4</integer>
        <key>Minute</key>
        <integer>15</integer>
    </dict>
</dict>
</plist>
I'll quickly highlight the various fields in the plist. Label is an arbitrary string assigned to the job (it should be unique among jobs); it's just a nice human readable identifier. LowPriorityIO and Nice let launchd know that this job isn't very important: if something important is going on when this job runs, let that other thing take precedence. ProgramArguments is what we're going to run: the osascript program with our script as the only argument. ServiceDescription is a human readable description of the job. StartCalendarInterval is when the job should run. This is similar to cron: put in the constraints you want, and leave out the ones you would have *'d in cron (Hour = 4 and Minute = 15 means this will run every year, month, and day at 4:15am).

Take that plist, modify it to your liking (or use Lingon to create one for you) and save it to ~/Library/LaunchAgents/local.isync.sync.plist. Now launchctl can be used to load and run the LaunchAgent. Fire up a terminal and run the following to load the LaunchAgent:

launchctl load ~/Library/LaunchAgents/local.isync.sync.plist

Now run it to make sure it works:

launchctl run local.isync.sync

If everything worked correctly, iSync should pop open, sync whatever it is you want to sync, and quit. Now that the job is loaded, it should run at the interval you assigned. Not only that, but you don't have to be the active user for it to run, meaning multiple users can load the same (or a similar) job sometime in the middle of the night. The only drawback I've found is that you must be logged in for the LaunchAgent to work (you don't have to be the current user, but you have to be logged in). UPDATE: I don't know what I was thinking, but you still have to be the active user (or console owner, as Apple puts it), so this doesn't really solve any of the shortcomings of cron. Looking at Apple's list of daemon-safe frameworks, it appears neither AppleScript nor SyncServices is daemon safe. Suffice it to say, I still like launchd, and will continue to use it to automate things in place of cron.

That's just the tip of the launchd iceberg! Happy hacking!

Update: I've received a few emails mentioning whitespace issues when copying and pasting the above AppleScript, so I've uploaded a binary here.

Permanent link to Automate iSync with launchd

PHP 5 Type Hinting
by Jon on Tuesday, February 20, 2007 file under: Technology

Since PHP 5.1, developers have been able to add "type hints" to function and method declarations. This is a huge boon for OO development in PHP: it promotes the use of defined objects instead of hashes, arrays, and invariants, and it helps find errors which might otherwise go unnoticed.

In the past, a variable of any type could be passed to a PHP function. Say you wanted to pass a database connection object into a function; you might declare something like this:

function grab_data(&$connection, $id)
{
  if (!$connection->isConnected()) { return null; }

  return $connection->getAll(
    'SELECT * FROM table WHERE id = ?;',
    array($id));
}
That's a relatively simple function. But wait, what if $connection is a string? Or null? We better add some error checking!

function grab_data(&$connection, $id)
{
  if (!is_object($connection) ||
      get_class($connection) != 'ADOConnection' ||
      !$connection->isConnected()) {

    return null;
  }

  return $connection->getAll(
    'SELECT * FROM table WHERE id = ?;',
    array($id));
}

That's safer; the function above ensures we're getting an ADOConnection object. It's also much more verbose. And what if the object isn't an ADOConnection, but a class derived from ADOConnection? Do we really want to update this method for every subclass of ADOConnection? That could get ridiculous, plus we can't shield this method from being used by others who might not realize how fragile it is. Type hinting to the rescue.

We want all the safety of the above code, with the benefit of allowing subclasses to be accepted as well. We can get that by a slight modification to the original method:

function grab_data(ADOConnection &$connection, $id)
{
  if (!$connection->isConnected()) { return null; }

  return $connection->getAll(
    'SELECT * FROM table WHERE id = ?;',
    array($id));
}

All that was added was the ADOConnection class name before the $connection parameter name; PHP does the rest of the work for you. Now this method will only accept a $connection parameter which is an ADOConnection, so we don't have to worry about other types of data being passed to this function. We also no longer have to worry about null variables; the type hint check takes care of that for us!

What do you mean we no longer have to worry? To be frank, your program will end with a fatal error if this function is called with an incorrect data type. That might sound like it sucks at first, but it means your program dies sooner, meaning you can find bugs sooner. It also means the stack trace you get will include the line of code which called the type-hinted function, not some line in the middle of the function which provides no hint as to what called it with the incorrect type in the first place.

What happens if we call grab_data with an incorrect data type? Let's see:


function grab_data(ADOConnection &$connection, $id)
{
  if (!$connection->isConnected()) { return null; }

  return $connection->getAll(
    'SELECT * FROM table WHERE id = ?;',
    array($id));
}

$connection = 'some string';

grab_data($connection, 3);

Fatal error: Argument 1 passed to grab_data() must be an
object of class ADOConnection, called in /home/test- on
line 12 and defined in /home/test- on line 3

shell returned 255

That's great! We die early so bugs show themselves sooner, we declare parameter types, so callers know exactly what they should be passing in, and we save quite a bit of manual error checking required to make our methods safe.

This also allows you to define and use classes like C structs (a class which contains only public members) for passing data to methods and functions in place of the arrays and hashes so commonly used in PHP. Creating and passing a struct to a type-hinted function or method ensures you know exactly what fields and methods are available to you. You also don't have to worry about misspelled string hash keys; you can let the interpreter do all the work of ensuring you're accessing valid fields and methods.
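Sketching that idea, here is a hypothetical SearchParams struct-style class passed through a type-hinted function (the class and function names are invented for illustration):

```php
<?php
// A struct-like class replaces a string-keyed hash: the fields are
// declared, so a misspelled key is impossible.
class SearchParams {
    public $query;
    public $page;

    public function __construct($query, $page = 1) {
        $this->query = $query;
        $this->page  = $page;
    }
}

// The type hint guarantees run_search() always receives a
// SearchParams, never a raw array or a string.
function run_search(SearchParams $params) {
    return "q={$params->query}&page={$params->page}";
}

$result = run_search(new SearchParams('n800', 2));
```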

Are there any drawbacks? Unfortunately, the only native type PHP allows hinting for is array. Hinting would be even more beneficial if you could hint native types the way the PHP documentation does (e.g. function explode(string $delimiter, string $string, int $limit = null)), or leave them off completely if you allow mixed types as inputs (and for backwards compatibility). Hinting variables as strings, booleans, or any of the other built-in types would reduce the amount of error checking safe functions require.

This is also a feature of PHP 5.1 or later and is not compatible with earlier versions of PHP, which could make your apps less portable. There are also many hosting providers still running PHP 4, which has no support for type hinting at all.

Finally, there is no facility for hinting return types, meaning even if you know the input to a function is good, you don't necessarily know the output is as well.

If you are running PHP 5.1 or later, type hinting can be a great way to find and eliminate bugs before they bite you. It also allows simpler, safer functions and methods that are self-describing. If you have no backwards compatibility requirements, you might want to give type hinting a shot!

Permanent link to PHP 5 Type Hinting

I'm a Big Fan of C++
by Jon on Monday, January 22, 2007 file under: Technology

When you read Digg, you're bound to find nonsense, and I try not to bite, but sometimes I get irked. An article by Jeff Atwood appeared on Digg over the weekend which links to a two-part interview with Bjarne Stroustrup, the designer and original implementor of C++.

I have long defended C++ as a great programming language, especially to Java programmers, who seem to only appreciate the flexibility of C++ when the features they previously derided are added to Java.

In Jeff's post he has a big block quote from Bjarne and then proceeds to blast C++ on two points which aren't really valid. His first point, that C++ is fast but unforgiving, amounts to this: people don't have any more brain capacity, but computers have quite a bit more computing capacity, so let's waste some of that computing capacity to save the poor programmers. I was wondering aloud this weekend why my current computer is seventy-two times faster than the computer I cut my teeth on, yet doesn't 'feel' any faster.

His second point is that C++ is designed to be extremely flexible. Wait, that sounds like a benefit, not a drawback. He goes on to say that it can be used to write operating system kernels and device drivers, but he might not realize that everything in the KDE stack is C++. I don't know of any websites built in C++, but I don't believe it would be as dangerous as he presupposes, thanks to the great data structures included in the Standard Template Library. Buffer overflows? Who's using char[] to store a string?

C++ offers features that most other OO languages don't: access to pointers and the ability to manage memory, among other things. I'm astounded when I read or hear people claiming these are disadvantages. If you don't want to deal with memory management or pointers, pass objects by value or reference and allocate all of your objects on the stack (i.e., don't use the new keyword). In my opinion, this low level knowledge of pointers and memory management is what separates hobbyists and programmers for hire from engineers. I'm not saying it's necessary for every project, and I like writing scripts in Ruby, but for something that needs to use the least amount of resources possible while running as quickly as possible and still maintain portability, C and C++ are great languages. (And I feel every app distributed for general use should attempt to use the least amount of resources possible and run as quickly as possible; if it's fast and ugly, people will use it and curse you; if it's slow and ugly, people won't use it.)

For some reason I allow myself to become uptight when Java programmers celebrate a recently added feature of Java that has long since been available in C++. A relatively recent example (from the past few years) has been "generics", which we called templates in C++ before generics were available in .Net or Java. Meanwhile, Java developers sit by while useful features like first-class functions and operator overloading (if Sun can do it, why can't I?) escape the grasp of Java programmers everywhere.

I often read that C++ is difficult to learn and use, but I must be missing something. I've written projects large and small in C++. I've always thought it was a very natural extension to C, which isn't a very difficult language to learn in the first place.

Best quote from the interview:

The idea of programming as a semiskilled task, practiced by people with a few months' training, is dangerous. We wouldn't tolerate plumbers or accountants that poorly educated. We don't have as an aim that architecture (of buildings) and engineering (of bridges and trains) should become more accessible to people with progressively less training. Indeed, one serious problem is that currently, too many software developers are undereducated and undertrained.

This is one of the most telling statements in the article:

...a friend of mine went to a conference where the keynote speaker asked the audience to indicate by show of hands, one, how many people disliked C++, and two, how many people had written a C++ program. There were twice as many people in the first group than the second. Expressing dislike of something you don't know is usually known as prejudice.

and unfortunately it's all too true. I'm sure most Java programmers would feel right at home developing in C++, and if they sprinkled in a few "delete" keywords at the end of their methods and learned about destructors, not only would they have portable code (just a recompile away), but code which is much faster than its Java counterpart.

I've given many programming languages a shot (Java, Objective-C, Ruby, PHP, C++, C, C#, Perl, JavaScript, and Bash are the ones I can think of off the top of my head) and C++ remains one of the most elegant. The ones I've stuck with are the ones I found compelling: C because it's universal, fast, and familiar; C++ because it provides the power of OOP to an already great language; Ruby because I like the syntax, style, and rapid pace of development; PHP because it's a ubiquitous and fast web development language; Objective-C because it's the de facto standard on Mac OS X, my platform of choice; and JavaScript because it's the de facto standard for client-side web development.

I've left behind Java, Perl, C#, and Bash because I have no compelling reason to use them. Anything I can write in Perl or Bash I can write in Ruby, which I'll find much more enjoyable and find easier to read if I need to come back to it later. And personally, I have no need that Java fills (except, perhaps a paycheck!). If I want something fast and compiled, I'll write it in C/C++. If I need to write something quickly, I'll do it in Ruby!

I'm sure other languages work great for other people, but I'll stick with C++ while keeping my feelers out for anything better (D is intriguing, but I still don't see any compelling reason to switch).

Permanent link to I'm a Big Fan of C++

iPhone in the Times
by Jon on Wednesday, January 10, 2007 file under: Technology

The New York Times has a great article which highlights what I like about Apple the most: their dedication to detail, their commitment to things that work like you'd expect them, and their discipline in not releasing anything which doesn't meet their extremely high standards.

Oh yeah, the article mentions the iPhone, too. I just hope there is a way to get one without a Cingular contract. It seems to be the mobile device I have been looking for (and it just happens to be a great phone as well!).

Permanent link to iPhone in the Times

Automated Blog Tagging
by Jon on Monday, January 1, 2007 file under: Technology

In the middle of December, I finished my first "real" semester of graduate classes at ASU. One of the more interesting projects I completed was a method of assigning multiple classes to blog posts, using a modified version of Ben Kamens' Bayesian Tournament Algorithm, which itself is an expanded version of the standard two-class Bayesian spam algorithm described by Paul Graham.

Kamens expanded on Graham's algorithm by moving it beyond a two class problem. With the Bayesian Tournament algorithm, instead of only classifying things as something or not something (typically spam or not spam), one of several categories might be chosen. For example, you might train it to sort your email into work, family, and spam.

That's all well and good, but tagging is catching on like wildfire, and typically a piece of media is given one or more tags. I couldn't really find anything having to do with classifying things with more than one category, so I thought I'd give it a whirl, and the results seemed to be largely successful.

To go along with the project, I created a "blog editor": a mocked-up, simplified version of the form where you might enter a blog post, hooked up to a classifier I wrote which was trained on posts from The Unofficial Apple Weblog. Here's a screencast showing how the editor classifies your post as you type.

The screencast doesn't have any audio, so here's some commentary. It first shows a shot of a post from TUAW along with its tags (Software, Cool tools, Productivity, Internet Tools). It then pans over to my editor, showing how it classifies as you type, with an author entering the text of the post shown previously. In the end, the classifier chooses Internet Tools, Productivity, Cool Tools, Analysis/Opinion, and Software as the tags for the post, pretty close to the tags chosen by the original author! For the record, this post was not in my training set, and those tags were completely chosen by my classifier.

The paper was rushed, but it gives an overview of the implementation for anyone interested. It was a lot of fun building a working classifier.

Permanent link to Automated Blog Tagging

grep Colors
by Jon on Tuesday, October 10, 2006 file under: Technology

I've posted a few times about adding color to console apps in OS X, so I thought I'd post another tidbit I found this past week. The GNU version of grep has the --color option (--colour for all you Europeans), which will highlight your search term in the search results. By default, the color is bold red (ANSI color code 1;31), but by setting the GREP_COLOR environment variable, you can change it to whatever you want. For example, I like green, so I might set it to 1;32.

To get this functionality every time you use grep, you can make an alias:

alias grep="grep --color"

and put that in your ~/.bash_profile, or, if you are an admin on your system, in /etc/profile. Run source /etc/profile or start a new terminal session, and you should have nicely colored output every time you use grep!

To set the color, add:

export GREP_COLOR="1;32"

to the same file you added the alias to, replacing 1;32 with whatever you would like.

Happy computing!

Permanent link to <code>grep</code> Colors

