Although the iPhone is like a computer, it differs significantly from a desktop machine. Its usage is entirely different, and developers should keep that in mind when developing applications.
The screen is compact compared to a desktop monitor, although it is large for a phone. So only the features most relevant to a mobile device should be included in the application.
Users use their fingers instead of a mouse and keyboard to give input to the iPhone. So GUI elements must be big enough to be touched comfortably.
iPhone users use the device when they are on the move. So they need to access important information as quickly as possible, without going through too many screens or inputs.
I’m a Windows programmer. I had never used an Apple machine, nor did I know about the Mac OS system architecture. But after going through iPhone SDK articles, I found that the Mac OS and iPhone GUI architectures closely resemble Win32 GUI programming. So I found it very easy to pick up the iPhone SDK with my knowledge of Windows’ event-driven GUI architecture. The GUI framework on the iPhone is known as UIKit, which is a component of Cocoa Touch, as opposed to Cocoa on Mac OS.
The iPhone SDK uses a language called Objective-C to write programs. At first the syntax felt a little bulky to me, but you can catch up on it in ten minutes; it’s the same object-oriented architecture in a different syntax.
The iPhone SDK has a nice GUI class hierarchy which reminds me of the Microsoft .NET Framework Windows Forms class hierarchy. These classes use delegation to receive and handle events, in a way similar to Win32 callbacks and .NET Framework delegates. For example, the developer can register an event handler for the “touch” event of a button.
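To make the analogy concrete, here is a minimal sketch of registering a touch handler on a UIKit button. This assumes a UIViewController subclass; the button layout and the `buttonTapped:` method name are my own illustrations, not code from any Apple sample.

```objective-c
// Inside a hypothetical UIViewController subclass.
- (void)viewDidLoad {
    [super viewDidLoad];
    UIButton *button = [UIButton buttonWithType:UIButtonTypeRoundedRect];
    button.frame = CGRectMake(20.0, 20.0, 280.0, 44.0);
    [button setTitle:@"Tap me" forState:UIControlStateNormal];
    // Register this object as the event handler, much like wiring a
    // .NET delegate to a button's Click event.
    [button addTarget:self
               action:@selector(buttonTapped:)
     forControlEvents:UIControlEventTouchUpInside];
    [self.view addSubview:button];
}

// Invoked by UIKit when the user lifts a finger inside the button.
- (void)buttonTapped:(id)sender {
    NSLog(@"Button was touched");
}
```

The target–action pair plays the role that a delegate instance plays in Windows Forms: the control holds a reference to the handler and calls it when the event fires.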
(Portion of UIKit classes. Image extracted from an ADC article)
The iPhone SDK encourages developers to use the MVC (Model-View-Controller) architecture in their applications. This ensures that the GUI is separated from the application logic and the data model.
Just like Mac OS and Windows, the Cocoa Touch framework uses an event-driven architecture to raise and handle events. Unlike the traditional keyboard and mouse events on a desktop, the iPhone has “touch” events. When a user’s fingers touch the iPhone screen, the touch sensor sends information about each finger touch to the OS. This is how multi-touch technology works. When multiple fingers touch the screen, the event handlers of the affected GUI elements receive information about all the touches through multiple event objects.
The iPhone SDK API directly supports simple touch gestures such as swipes, double taps, and pinches. To handle more complex gestures, you can examine the event objects and generate your own custom events.
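The raw touch data described above arrives through a handful of methods that a view can override. The sketch below shows the touches-began case; the logging is purely illustrative.

```objective-c
// Inside a hypothetical UIView subclass: inspect raw touch data.
- (void)touchesBegan:(NSSet *)touches withEvent:(UIEvent *)event {
    // Each UITouch object represents one finger on the screen, so a
    // multi-touch contact delivers several objects in the set.
    for (UITouch *touch in touches) {
        CGPoint point = [touch locationInView:self];
        NSLog(@"Finger down at (%f, %f), tap count %d",
              point.x, point.y, (int)[touch tapCount]);
    }
    // A tapCount of 2 on a single touch indicates a double tap;
    // anything the API does not classify for you (custom swipes,
    // rotations, etc.) must be derived from these coordinates.
}
```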
3-Axis Accelerometer
The iPhone has an accelerometer which can report the acceleration of the device along the X, Y, and Z axes (see image). Because of the Earth’s gravity, the accelerometer records different accelerations along the three axes depending on how the device is held.
(Image extracted from an ADC article)
Why does a phone need an accelerometer? Well, with the help of the accelerometer, the iPhone can do amazing things. Using it, you can determine the orientation of the device in 3D space at any given time, and create orientation-aware applications that respond to how the user physically holds the device. A basic use of this is demonstrated by the built-in Photos application, where photos automatically switch to landscape mode when the user holds the iPhone horizontally.
A more advanced and exciting use is gaming. (The iPhone uses OpenGL ES to render 3D graphics.) You can feed accelerometer data into your game and use the whole phone as a joystick: when the user tilts the phone, the objects in the game respond accordingly, creating an exciting user experience. The video of the iPhone SDK event shows OpenGL ES and the accelerometer in action. It shows an application in which the user can “shake” the phone to perform undo operations, and a 3D game where the gamer flies a fighter jet by tilting the iPhone like a joystick.
Applications can register an event handler to receive acceleration events, and a reporting frequency can be specified. The default is 100 times per second, which is good for games, but you should choose a frequency that matches your application’s needs in order to conserve battery life.
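A minimal sketch of this registration, assuming a controller object that adopts the UIAccelerometerDelegate protocol; the 30 Hz rate is just an example of trading precision for battery life.

```objective-c
// Hypothetical setup method on a controller adopting UIAccelerometerDelegate.
- (void)startAccelerometer {
    UIAccelerometer *accelerometer = [UIAccelerometer sharedAccelerometer];
    // Ask for 30 updates per second instead of the default 100,
    // since this (hypothetical) app does not need game-grade precision.
    accelerometer.updateInterval = 1.0 / 30.0;
    accelerometer.delegate = self;
}

// Called by the system at roughly the requested frequency.
- (void)accelerometer:(UIAccelerometer *)accelerometer
        didAccelerate:(UIAcceleration *)acceleration {
    // x, y, and z report the acceleration along each axis in Gs;
    // a device lying flat shows gravity mostly on the z axis.
    NSLog(@"x=%f y=%f z=%f", acceleration.x, acceleration.y, acceleration.z);
}
```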
Since the iPhone is a mobile device running on battery, every effort must be made to make your applications efficient. Developers should take care to release resources that are no longer needed. Mac OS X natively has a garbage collector to free up resources (impressive, isn’t it? Windows doesn’t have one!). But the iPhone doesn’t have the luxury of a garbage collector, since it’s restricted to a low-power-consumption environment. The SDK uses a reference-counting mechanism to keep track of the resources in use.
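In practice, reference counting means every `alloc`, `retain`, or `copy` must eventually be balanced by a `release`. A minimal sketch of the ownership rules:

```objective-c
// Minimal sketch of Cocoa reference counting (no garbage collector).
NSMutableString *name = [[NSMutableString alloc] init];  // retain count 1
[name appendString:@"iPhone"];
[name retain];    // retain count 2: a second owner claims the object
[name release];   // retain count 1: that owner is done
[name release];   // retain count 0: the object is deallocated

// Convenience constructors return autoreleased objects, which the
// current autorelease pool releases for you later in the run loop:
NSString *greeting = [NSString stringWithFormat:@"Hello, %@!", @"iPhone"];
```

Forgetting a `release` leaks memory, and on a device with this little RAM the OS will not clean up after you until the application exits.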
One Application At a Time
The iPhone can run only one application at a time. Of course, the OS kernel and a few low-level daemon threads keep running all the time, but only one user application runs at a time. This keeps processor usage low and gives the running application the full resources of the device. So if a user receives a phone call or presses the Home button while working in an application, that application really exits. This is one major fact that iPhone application developers should take into account.
Although this exclusive application execution has some benefits, developers have to make their applications adapt to the environment. Suppose a user receives a phone call while editing a photo in a photo-editing application: the application exits and the phone application launches. The application should be able to restore its last working state when the user returns to it after the call. So the developer must save the state of the application when it quits and restore it when the application is launched again. All applications receive a notification when the system is about to quit them. As I understand it, this may be something similar to the “kill” signal in Linux.
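A sketch of that save-and-restore dance in an application delegate. The `lastPhotoIndex` property and the defaults key are purely illustrative stand-ins for whatever state your application needs to preserve.

```objective-c
// Hypothetical UIApplicationDelegate methods; "lastPhotoIndex" and
// the "LastPhotoIndex" key are illustrative, not real API.
- (void)applicationWillTerminate:(UIApplication *)application {
    NSUserDefaults *defaults = [NSUserDefaults standardUserDefaults];
    [defaults setInteger:self.lastPhotoIndex forKey:@"LastPhotoIndex"];
    [defaults synchronize];  // flush the state to disk before exiting
}

- (void)applicationDidFinishLaunching:(UIApplication *)application {
    // Restore the saved state, giving the illusion that the
    // application never stopped running.
    self.lastPhotoIndex = [[NSUserDefaults standardUserDefaults]
                               integerForKey:@"LastPhotoIndex"];
}
```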
This is an important design aspect of the iPhone. Almost every built-in application has this restore feature, giving the illusion that the application keeps running in the background when it is interrupted. What they actually do is save their state when they exit, and restore themselves when they are launched the next time.
The iPhone SDK comes with a rich set of tools to aid developers in building iPhone applications. Xcode is the IDE for iPhone applications, and it runs only on Mac OS X. It includes Instruments, which can remotely monitor and debug your applications. Deploying applications is easy: with a single button click, your application is compiled and loaded onto the iPhone. You can build your application onto a real iPhone connected to the computer, or you can use the iPhone Simulator that comes with the SDK. The aforementioned video shows the remote performance monitor tracking the performance of the 3D game. I must say, it’s as good as Microsoft Visual Studio 2008 (if not better). As a .NET Framework developer, I find it easy to adapt to iPhone development.
This article describes only the most exciting features of the iPhone SDK. There are many more, such as OpenGL, the media framework, animations, Microsoft Exchange support, etc. If you are interested, I suggest you register for the Apple Developer Connection (free). You gain access to lots of documents, videos (through iTunes), and sample applications.
Finally, I must say, the iPhone truly is an amazing device which opens the door for a new generation of mobile devices. Apple has developed a flexible framework and tools that developers will love to build on. They launched the iPhone SDK on March 6 (watch the videos), and they will release the iPhone 2.0 software update in June. As for me, I’m just waiting until my iPhone arrives in April. Let’s hope for the best!