Tipping the Edge

How do we make software more network aware?

Tim O’Reilly recently talked about the evolution of software and how all software should be network aware. While I generally believe that this is true (see my February 2000 article on Hybrid computing), I’d like to make a few comments on Tim’s note.

Discoverability and Security

The first assumption is that software should be able to connect automatically. While this is generally a good idea, there is a need to set up different levels of accessibility. Businesses will generally want some control over how accessible a machine is. From there, one must establish a level of trust to govern relationships between machines. This should include some basic categorization, with the individual user as the lowest level, rising up through successively larger groupings. For example, a user could be a member of a team, that team could be a subset of a larger division, which itself would be a subset of a company, which would be a subset of an industry. The idea here is to automate the process of identification while still ensuring a degree of trust, in order to maintain security. While a piece of software can be network aware, the network may not necessarily want it to be aware of all resources. For example, if I am a visitor at BigCo, BigCo may not want me to have full control over all their resources and full access to all their services. My machine should broadcast my credentials and, based on those, gain access to certain resources.
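To make the idea concrete, here is a minimal sketch of those nested groupings: a user belongs to a team, a team to a division, and so on up the chain, and a resource grants access when any level of the broadcast chain is one it trusts. All names here (Group, has_access, BigCo's group names) are illustrative assumptions, not an existing protocol.

```python
from dataclasses import dataclass

@dataclass
class Group:
    """One level in the categorization: user, team, division, company, industry."""
    name: str
    parent: "Group | None" = None

    def chain(self) -> list[str]:
        """All group names from this level up to the root."""
        names, g = [], self
        while g is not None:
            names.append(g.name)
            g = g.parent
        return names

def has_access(credentials: Group, trusted: set[str]) -> bool:
    # The visiting machine broadcasts its group chain; the network
    # grants access only if some level of that chain is trusted.
    return any(name in trusted for name in credentials.chain())

industry = Group("industry")
company = Group("BigCo", industry)
team = Group("platform-team", Group("engineering", company))
visitor = Group("alice", team)

# BigCo could expose public resources to anyone in the industry...
print(has_access(visitor, {"industry"}))  # True
# ...while an outsider from another company fails a company-level check.
outsider = Group("bob", Group("OtherCo", industry))
print(has_access(outsider, {"BigCo"}))    # False
```

The point of the chain is that identification stays automatic: the visitor broadcasts once, and each resource decides how far down the hierarchy it is willing to trust.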

Why Buddy Lists are NOT the way to go

Tim advocates the use of buddy lists to set up those relationships. I would venture to say that buddy lists do not provide the level of granularity required. From there, there are two potential ways to go: enhance buddy lists to allow greater levels of categorization, or come up with a completely new format. I would be tempted to go for the former, as it builds on top of an existing standard.
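As a rough illustration of what "enhancing" a buddy list might mean, here is a sketch of a flat entry extended with the nested categorization levels discussed above. The field names and the visibility rule are assumptions for illustration, not part of any existing buddy list format.

```python
# Classic buddy lists are flat: a screen name in a named group.
# The proposed extension adds nested categorization levels so that
# visibility can be granted with finer granularity.
buddy_entry = {
    "screen_name": "tim",
    "groups": ["friends"],        # the existing flat grouping
    "categories": {               # hypothetical extension
        "team": "editorial",
        "division": "publishing",
        "company": "OReilly",
        "industry": "media",
    },
}

def visible_to(entry: dict, level: str, value: str) -> bool:
    """Grant visibility only if the buddy matches the given category level."""
    return entry["categories"].get(level) == value

print(visible_to(buddy_entry, "company", "OReilly"))   # True
print(visible_to(buddy_entry, "company", "BigCo"))     # False
```

Because the extension rides on top of the existing list rather than replacing it, clients that only understand the flat format could simply ignore the extra field.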

Two-way data and XML formatting

Tim makes a point that every piece of software should expose some version of its data as XML feeds. While I generally agree that the data should be represented in a common format, XML being the ideal choice, I object to it being a feed. What applications should provide is an API that gives access to that data instead of a feed. The reason for my semantic disagreement here is that a feed is generally pushed or pulled on a regular schedule, whether it is needed or not. Providing an API would ensure that the data is only obtained upon request, therefore conserving precious network resources. A good example of feed misuse was PointCast, a software client that would poll the network every few hours for feeds. The problem was that it would do so at the same time for every client on the network, creating regular spikes in network traffic and earning much hatred from network administrators.

A proper API could be designed using either XML-RPC or SOAP as a way to carry its messages.
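As a minimal sketch of that pull-on-demand model, the example below uses Python's standard XML-RPC support: the server serializes data as XML only when a client actually asks for it, rather than on a fixed schedule. The method name `get_data` and the sample payload are assumptions for illustration.

```python
import threading
import xmlrpc.client
from xmlrpc.server import SimpleXMLRPCServer

def get_data() -> dict:
    # Unlike a feed pushed or pulled on a schedule, this data is
    # serialized to XML only in response to an explicit request.
    return {"title": "status", "items": ["a", "b"]}

# Port 0 lets the OS pick a free port for this demonstration.
server = SimpleXMLRPCServer(("localhost", 0), logRequests=False)
server.register_function(get_data)
port = server.server_address[1]
threading.Thread(target=server.serve_forever, daemon=True).start()

# A client pulls the data exactly when it needs it.
client = xmlrpc.client.ServerProxy(f"http://localhost:{port}")
print(client.get_data())  # {'title': 'status', 'items': ['a', 'b']}
server.shutdown()
```

The same contract could be expressed over SOAP; XML-RPC is simply the lighter of the two envelopes.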

Where does the data go?

The other issue is where the data should reside. As a general rule, computers are no longer well suited to serve as the only repository of data. There is a need to represent data in a fashion that makes it largely independent of the platform it's running on. A large part of the problem here is who you trust (there's that trust issue again) with your data. For example, buddy lists in AIM are stored on the AOL servers. Do you trust them with that data? Would you trust them with more personal data (written documents, etc.)? Would you trust Microsoft with it? Would you trust anyone?

This brings up interesting possibilities in terms of either keeping the data on a single computing device, from which it might be shared, or moving it to many different places (making it more difficult to ensure change control and general data management). This is an issue that still needs to be resolved.

Online/Offline

The one point that Tim does not cover is the online/offline challenge. One cannot assume that a computer is always connected to the network. As much as we would like it to be that way, computers are often disconnected, whether on a plane ride or in a place where network resources are limited or nonexistent. Programs should be aware of that state and still be able to work properly when offline. As a result, software should have a mode that allows it to check whether network resources are available. If they are, it should check the sharing arrangements. If they are not, it should still provide basic functionality.
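That check-then-fall-back behaviour can be sketched in a few lines: probe for the network, and serve locally cached data when nothing is reachable. The probe host, port, and cache contents here are assumptions for illustration.

```python
import socket

# Hypothetical local cache that would be refreshed whenever the
# program is online.
LOCAL_CACHE = {"items": ["cached-a", "cached-b"]}

def network_available(host: str = "example.com", port: int = 80,
                      timeout: float = 1.0) -> bool:
    """Probe for network resources with a short TCP connection attempt."""
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False

def load_data() -> dict:
    if network_available():
        # Online: check the sharing arrangements and fetch fresh data
        # (elided here), refreshing the local cache along the way.
        return {"items": ["fresh-a", "fresh-b"]}
    # Offline: basic functionality is still available from the cache.
    return LOCAL_CACHE
```

The key design point is that the offline path is a first-class mode, not an error condition: the program degrades to its cached state instead of failing.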

About the Author

Tristan Louis

Writing and working on the internet since 1993, I've launched 6 companies, of which 2 (internet.com and Earthweb) went public and two were sold (Net Quotient and MoveableMedia). My latest, Keepskor, provides tools allowing anyone to develop mobile and connected TV games without writing a line of code. This is my personal site and all opinions here are mine.