Wednesday, October 04, 2006

Faster than light travel

We know that light travels at c in a vacuum and slower in other media; the slowdown is measured by the index of refraction.
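
For reference, the speed of light in a medium with index of refraction n is:

v = c / n

Water, for example, has n of about 1.33, so light moves through it at roughly 0.75c.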

We know that scientists have slowed down light to a very small percentage of c.

Is there another state of matter in which the speed of light is greater than c?

Could quantum entanglement be such a state?

Wednesday, September 20, 2006

GPU processing for GIS is coming!

There is a very interesting article over at The Register that has some "insider" info on a new product from AMD/ATI. ATI has created a server product that will make the raw processing power of the GPU available to general applications. This has incredible implications for the GIS software field.

The advent of programmable vertex and pixel shading has opened up a reverse Pandora's box, in my opinion. A whole lot of good things are coming out of the ability to access the highly optimized floating point processing power available on modern GPUs. However, like the external floating point coprocessors of the '80s, there are some significant issues with using this technology.

Stream processing isn't applicable to general software development, but scientific computing can use it to great advantage for problems that fit within its scope. It is most useful for taking a large block of homogeneous data, applying fixed transforms (guided by variable parameters) to it, and storing the results in a similar block of data; that resulting data can then be further processed.
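
To make that concrete, here is a minimal CPU-side sketch of the model (my own invented kernel, just to show the shape of it):

#include <vector>
#include <cstddef>

// A "kernel": a fixed transform applied uniformly to every element,
// guided by parameters that stay constant across the whole stream.
struct ScaleBias {
    float scale;
    float bias;
    float operator()(float x) const { return scale * x + bias; }
};

// Apply a kernel to a homogeneous block of data, producing a similar
// block that can itself be fed to the next kernel in the pipeline.
template <typename Kernel>
std::vector<float> run_kernel(const std::vector<float>& in, Kernel k)
{
    std::vector<float> out(in.size());
    for (std::size_t i = 0; i < in.size(); ++i)
        out[i] = k(in[i]);   // on a GPU, these run in parallel
    return out;
}

On a GPU the loop body runs in parallel across the shader units; the whole trick is that the kernel has no dependencies between elements.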

For anyone in the field, the work done by McCool et al. at the University of Waterloo on the Sh metaprogramming language has given a glimpse of what was to come. When I first read the book describing Sh, I wanted to use it immediately, but found it a bit limiting due to its static nature and the inability to treat shading programs as dynamic assets, since the code you write is basically hardwired into your executable. There is, however, an intermediate format that could possibly be used as a game asset, but it is just a little too funky for use as a general purpose shader generator. This programming model fits scientific (GIS) applications much better. I'm looking forward to seeing what comes out of AMD/ATI. We are just scratching the surface right now; I think there needs to be more evolution in the field.

I am planning on making EarthBrowser plugins that make use of this kind of functionality in the future. All the pieces are in place right now for version 3.0, but each plugin has to be written for a specific problem domain.

Friday, September 15, 2006

Goodbye Xena, Hello Eris

The planet formerly nicknamed "Xena" has officially been named Eris, a name suggested by its discoverer, Michael Brown. Eris is the Greek goddess of strife, which is what its discovery has caused. The commotion ultimately led to the demotion of the smaller Pluto to the status of dwarf planet. Eris has one known satellite, now named Dysnomia, formerly nicknamed "Gabrielle."

Pluto has always been an interesting object for many reasons, and many people are upset that it is no longer considered a planet. However, I agree with the new designation: its orbit is too wild, it is too small, and the alternative of adding the other newly discovered Kuiper belt objects to the list of official planets is not really an acceptable outcome. Pluto has received its minor planet number, 134340.

Here is the official IAU designation.

Thursday, September 14, 2006

Don't be evil

Well, I guess the other shoe has dropped. Google has now hired the most evil lobbying shop in the country. I remember that they previously acquired their earth imagery with a restrictive license not allowing other companies to license the same data. I can't recall which company this was with, but suffice it to say that the "don't be evil" ruse that has given them the benefit of many a coder's doubt is now inoperative. Too bad, I wanted to believe that a corporation could actually try to do good for the world.

Some people will say that they are just paying them as a form of protection money from the Republican juggernaut. Some will say that it is the responsibility of a publicly traded company to maximize its shareholders' profits. Some will say that maybe they won't have them pull any of the dirty tricks that are the only reason to hire DCI. Sorry, but that is just too many ifs and maybes; as it should be, you are judged by the company you keep. Corporations do not need to be sociopaths. Brin and Page should be ashamed.

Google fanboys, join the Republican fanboys. Sadly, you are now the same.

Sunday, September 10, 2006

GML & JP2 a simple concept with real impact

Funny how little it takes to make programmers happy: just make our jobs easier. I mentioned in my last post that AJAX does something fairly obvious, and yet it is a quantum leap in the web browser world. Sadly, most people still think of the web browser as the only interface to the internet. Browsers are so needlessly restrictive; I suppose security issues have a lot to do with it, but I think it is mostly a lack of imagination.

That brings me to GML in JP2. Jeff Thurston over at Vector One has a post up which goes into just enough detail to let you know that it is a good idea. Not brilliant, not even tricky in the least. The designers of the JPEG 2000 standard saw fit to allow text metadata to be stored alongside the picture in their format. That is all that is needed to add a simple XML file conforming to the GML specification, which enables great things like telling you how the JP2 image(s) are to be georeferenced; you can even add point and polygon features. TIFF allows this as well, but anyone who has looked at the guts of libtiff, or seen the contortions needed to make an image conform to the GeoTIFF "well known text" spec, knows that it is anything but easy.

Why is it a big deal? It is and it isn't. JPEG has comment tags, but I'm not sure if they have a length limit, and I don't really care to check. PNG? Same thing. The reason it could be a big deal is that people are talking about it and building a de facto standard that will presumably be supported in future GIS software. Much like the "well known text" popularized by the ubiquitous GDAL and PROJ.4 packages, a tacit agreement that you check your JP2 images for GML metadata is all that is needed for this to make a significant impact.
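
To give a sense of how simple the check could be, here is a rough sketch (my own, not from any official library) of walking a JP2 file's top-level boxes looking for an 'xml ' box; the GMLJP2 proposal actually nests the XML inside association boxes, but the principle is the same:

#include <cstdint>
#include <cstdio>
#include <string>
#include <vector>

// Read a 32-bit big-endian value; returns false on EOF.
static bool read_be32(std::FILE* f, uint32_t& v)
{
    unsigned char b[4];
    if (std::fread(b, 1, 4, f) != 4) return false;
    v = (uint32_t(b[0]) << 24) | (uint32_t(b[1]) << 16)
      | (uint32_t(b[2]) << 8) | uint32_t(b[3]);
    return true;
}

// Walk the top-level boxes of a JP2 file looking for an 'xml ' box.
// Returns its contents, or an empty string if none is found.
std::string find_xml_box(const char* path)
{
    std::FILE* f = std::fopen(path, "rb");
    if (!f) return std::string();
    std::string xml;
    uint32_t len, type;
    while (read_be32(f, len) && read_be32(f, type)) {
        if (len <= 8) break;          // 0/1 mean special lengths; skip them in this sketch
        uint32_t payload = len - 8;   // box length includes the 8-byte header
        if (type == 0x786D6C20) {     // box type 'xml '
            std::vector<char> buf(payload);
            if (std::fread(&buf[0], 1, payload, f) == payload)
                xml.assign(buf.begin(), buf.end());
            break;
        }
        std::fseek(f, long(payload), SEEK_CUR);  // skip other boxes
    }
    std::fclose(f);
    return xml;
}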

But what is the impact of GML & JP2 exactly? By combining the image with the metadata you make distribution, storage, reference and cataloging of such datafiles an order of magnitude simpler. Like my previous post about the "seduction of the one," having one of anything makes things so much easier in so many ways. Just think of all the possible states inherent in the horrid shapefile format: you have the .shp, the .dbf and possibly the .shx file just to describe some vector data. Right there you have a lot of code to manage all of the possible states of the files. You also have to have data transport functionality that can send and request data from more than one file if needed. It is about an order of magnitude harder to handle a multi-file format than a single-file one. Not only that, but if you do support a multi-file format, you are boxed into not doing something really slick that is only possible with a one-file format.

As an example, EarthBrowser v3.0 internally has a really neat unified data stream architecture in which I can attach arbitrary metadata to each stream. A simple but powerful concept that can simplify object data interfaces across the board. If I had to make a unified data stream structure that incorporated multiple streams as different parts of the same data, well, no thanks. If you really had to support something like that, you would probably want to pre-process the data into an internal format when reading, and just bite the bullet and code it up if you ever had to write that format out.
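
As a toy illustration of the concept (invented names, not EarthBrowser's actual classes), the core of it is just one stream type with keyed metadata riding along:

#include <map>
#include <string>
#include <vector>

// One stream type for all data, with arbitrary metadata attached.
// Consumers read bytes; anything else they need to know travels
// alongside as key/value pairs.
class DataStream {
public:
    void set_meta(const std::string& key, const std::string& value)
        { meta_[key] = value; }

    std::string get_meta(const std::string& key) const {
        std::map<std::string, std::string>::const_iterator i = meta_.find(key);
        return i == meta_.end() ? std::string() : i->second;
    }

    std::vector<char>& bytes() { return bytes_; }

private:
    std::map<std::string, std::string> meta_;
    std::vector<char> bytes_;
};

A consumer might then do stream.set_meta("content-type", "image/jp2") or stream.set_meta("projection", "EPSG:4326") and every downstream interface stays the same.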

There you have it: GML and JP2. Highly compressed raster data along with descriptive metadata, all in one package. I give it a big thumbs up; at least someone is looking out for us poor overworked programmers.

KML needs an AJAX style upgrade

The cranky older programmer is back for another session detailing the failings of today's "best of breed" applications. Take AJAX, the acronym for a simple concept that has brought new excitement to web development, dubbed "Web 2.0." I've read so many excited articles and posts about how great this new capability is and how it is going to bring about a resurgence of the dot-com boom. That may be the case, but let me say something about the underlying technology of AJAX.

Cutting away all the marketing hype, the two keys to the whole thing are: 1) the ability for a script to download data asynchronously and receive a callback when it has arrived, and 2) the ability to change the contents of a web page without the entire page refreshing. Of course there are a lot of third party packages springing up around these capabilities, but they all rely on these foundations. From the perspective of a programmer, my question is: what took the web browsers so long? Gee, let's let them specify their own download URLs and process them with a scripting language, rather than forcing static URLs and fixed processing of interface elements by the web browser itself. Guess what else follows that outdated content model? Yep: KML and Google Earth. At least the first few halting steps are being taken in the web development world, thanks to JavaScript and the XML DOM.

Now, why is KML lame? For a simple example, you can write a KML file to show a set of georeferenced data points. That's great, but what if you have a set of points with different attributes that you want to publish and view, but not all at once? Perhaps you want to enable some selection criteria for the data points, such as earthquakes during an arbitrary time frame. A hokey solution could perhaps be ginned up using the network link and having a separate server script parse the link and return the appropriate data. However, how are you going to provide an interface for the user to make that arbitrary selection? With KML? At least HTML has JavaScript and GUI elements; KML and Google Earth have no easy client-side data capture or processing capabilities that aren't hard coded into GE itself. Please correct me if I'm wrong and I'll eat a heapin' helpin' o' crow.

I look at the spectrum of software development going on in various industries right now and am somewhat stunned by how isolated each one is. The game industry is doing some fantastic work, the best in the business, but the GIS field doesn't seem to pay any attention. Web development is a huge chunk of the market but is completely boxed in by the web browser. Everyone has a web browser, but writing a plugin for all browsers is a herculean task. I heard that the Opera web browser will include a BitTorrent client. Hurrah, someone is getting it!

I must say again that I think Google Earth is the best earth explorer out there right now. However, there really needs to be some serious innovation and insight in the software industry in general. I've taken a look at many open source projects out there and am shocked by how many are written in C. Of those that are written in C++ (the only language for serious library and application development work), fewer still incorporate the STL; I've only come across two that use the boost libraries, yikes! I realize it takes a lot of time and effort to keep up with the state of the art and evolve along with other industry segments' innovations. Perhaps the corporations putting out software today are just unwilling to support the continuing education of their primary product creation assets, the programmers. The programmers themselves need to take responsibility for their own continuing education during non-work hours if they want to be marketable in the future. Finally, product and project managers need to have some real programming experience (at least 5-7 years) to be considered competent, in my opinion. It's too easy for an inexperienced programmer to bamboozle someone who doesn't know about development work, and it is also too easy for a forceful manager to impose unrealistic expectations on a development team.

Let's all celebrate the advent of AJAX: client-side control of what to download and how to process and display it. It only took a decade, but at least it's a start. Google does it with Google Maps; think, guys.

Sunday, September 03, 2006

Google Earth and KML are outdated

The developers of Google Earth boast that it works well with older graphics cards; that is because it is based on an aging codebase that was designed with those components in mind. EarthBrowser (as primitive as it was) had the virtual globe market all to itself from 1998 to about 2002, when I first heard about Keyhole's EarthViewer after someone from Keyhole offered to buy earthbrowser.com and my customer list. The code is probably still based on what they developed back then. I can see that they have added more and more on top of that old code. Believe me, you can really pile new features on top of an old foundation, but it just gets harder and harder. It gets harder to add new things and it gets harder to change old ones. Just look at Windows or any product that has been out there for any length of time. EarthBrowser itself has been rewritten twice already and I'm on my third complete rewrite. A complete rewrite is almost never done for an established commercial product due to the time, cost and difficulty of such an undertaking. However, the benefits can be immense; for an example, look at the transformation of the Mac OS since Steve Jobs came back.

Back to Google Earth. Google seems good at publishing APIs for their web services, like Google Maps. Unfortunately for the people who use them, the licensing terms reserve the right to yank the rug out from under you at any time. The Google Earth API is effectively KML. Perhaps they are going to introduce something new after the acquisition of SketchUp, which has a reportedly nice Ruby API, but for now it is just static data with the weakest possibility of time-based animation through some kludgy use of the "network link." By the way, what kind of hyperlink is not a "network" link? Just wondering.

I don't want to help Google with any of its problems, since they are the main competitive threat to EarthBrowser. I tried writing a file format for EarthBrowser back in 2000 (version 1) that would use static files to control actions in the program. It was a very difficult thing to keep up with. Any new action had to be coded into the program first, then an interface to that code provided, then access exposed through the file format. Tweaking that was a nightmare, so I tried making my own tiny programming language to embed in the files to provide more flexibility. It worked OK, but it just added another layer of complexity on top of the file format and the hand-coded feature linkages. You can see the dilemma they face as they add more and more features to KML that must be supported in current and future GE versions. They need a rewrite, but guess who will have a new virtual globe out soon (besides EarthBrowser)... Microsoft. I don't know what MS has in store for us, but from the looks of Flight Simulator X, it will blow GE out of the water.

Here is a discussion of one of the extensions someone has made for Google Earth using KML. It is just so kludgy...

I've learned the lesson of trying to make a static file format control a virtual globe. The next version of EarthBrowser will feature a fully scriptable engine with which users and developers can add their own data types and visualizations directly on the globe. As an example, it will be possible to code a module that downloads raw data from any source (e.g. NASA) and processes it for display on the globe. From this scriptable base it will be easy to create a KML converter, and to update it for new versions of KML with few changes to the actual code base.
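
A hypothetical sketch of what such a module interface might look like (simplified, invented names):

#include <string>
#include <vector>

// A plugin module: says where to fetch raw data, turns the downloaded
// bytes into something renderable, and draws it on the globe each frame.
class DataModule {
public:
    virtual ~DataModule() {}
    // Where to get the raw data (e.g. a NASA data server URL).
    virtual std::string source_url() const = 0;
    // Turn the downloaded bytes into renderable geometry/imagery.
    virtual void process(const std::vector<char>& raw) = 0;
    // Draw the processed result on the globe.
    virtual void render() = 0;
};

A KML converter would then just be one more module conforming to this interface.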

I'll leave you with an early screen shot of EarthBrowser under development:

Tuesday, July 11, 2006

The seduction of The One

As a programmer, the notion of The One is very tempting to me. Let me explain.

When designing code, you come across many different elements that have to be coordinated, manipulated and routed. Data and state information may need to be transmitted to other parts of your code, other programs on your system and sometimes even remote systems. Usually you come up with a model of how these different parts will interact, and you can make simplifications in the code that enable enormous flexibility and scalability. It also gives me a good feeling inside, knowing that I've just created a quality tool that will make the project easier in the future. I don't know much about eastern philosophy, but perhaps this is a Zen or Tao feeling of "rightness" in the code. Anyone who has spent much time programming will know this feeling.

Having an abstraction that provides a single interface from many code state sources to many state consumer destinations is something that, when done right, reduces the complexity of the code by an order of magnitude. This is "The One": a single representation of an idea that interoperates with all or most of your code, making state changes nearly effortless.

However, in real coding things are never that simple. There are always problems with dependencies and synchronization, and sometimes it is like trying to fit a square peg into a round hole. There is a saying attributed to Einstein along the lines of "make things as simple as possible, but no simpler." This rings true again and again when coding. I have wasted countless days, weeks, even months trying to create an abstract superset of functionality that the project would fit into nicely, with plenty of room to expand. Wouldn't that be nice? To go from being an expert programmer to a master code craftsman, one must learn to avoid this pitfall at all costs. Nothing eats up more time than writing code that winds up never being used. We all throw away big blocks of code when a better replacement comes along, and that is unavoidable, but the planning stage of a project is where an over-enthusiastic programmer can really mess things up with a "simplification." There are local maxima and minima in programming, and going over a little hill of work will sometimes put you in a state where things are much easier. More often, however, doing a little foundation work to smooth out the interface will leave you where you started or, even worse, make things more complex.

To tie this to my recent post about the shapedb format: the ability to add raster data to the shapedb is certainly nice and simplifies distribution of related data. However, the need that gives rise to the shapedb format is not a convenient repository for data, but the processing overhead required to extract and convert shapefile data into something useful. Hopefully I've just saved myself a few days of trying to make a nice "geodatabase" format that fits all sizes; I'll just focus on vector data for now.

Wednesday, July 05, 2006

Shapefiles considered harmful

There were a couple of posts recently about the usefulness of the shapefile going into the future. Jeff Thurston posed the question in his Moving Beyond the Shapefile post. A responding post by the Drkside of gis took an appropriately opposing position to the idea of using personal geodatabases as a replacement. Closed formats are no longer suitable in such an important data market. They reduce compatibility and introduce vendor lock-in on data that should be, and in many cases is, in the public domain.

As a GIS file format discussion, this is near and dear to my heart. I agree wholeheartedly with the premise that the shapefile's days are, and should be, numbered. The question is: who will be big enough to put out a competitor? The .shp, .dbf and .shx troika needed to describe a single set of data reeks of '80s design. A single unit of data should reside in a single file; the complexity of opening three files and coordinating the shared representation of data between them would be a non-starter if it were designed today. I understand the convenience of decoupling the index and attributes from the data when adding records to a shapefile (or should I say shapefiles), since you can just write to the end of each file rather than shuffle things around in a single file. Sorry, but that just isn't a good enough excuse anymore.

This isn't just a helpless rant about a data format that has outlived its time. I am proposing a real alternative, a non-closed geodatabase, which I will call a "shapedb," built on the open source database SQLite. If you haven't seen it yet, it is a cross platform, open source embedded database engine. It can be built into your commercial or non-commercial product under its very non-restrictive license. I have been using it in EarthBrowser from 2.8 onward and am relying on it heavily in version 3. I can't tell you how much easier an embedded database makes things. The ability to manipulate data with SQL statements and extract just the information you need is a quantum leap from the old dumb file format. SQLite files are cross platform compatible, so the endianness issues between Motorola and Intel aren't a problem (believe me, that's usually a big issue, even for shapefiles). You can add as much data as you like to the file and index it however you like. Not only that, you can have raster data as well as vector data, GML data, KML data or whatever your requirements call for.
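
For anyone who hasn't tried it, here is how little code an embedded database needs (a minimal sketch using the standard SQLite C API):

#include <cstdio>
#include <sqlite3.h>

int main()
{
    sqlite3* db = 0;
    if (sqlite3_open("shapes.db", &db) != SQLITE_OK) return 1;

    // Create a table and insert a row; errors come back through errmsg.
    char* errmsg = 0;
    sqlite3_exec(db,
        "create table if not exists notes (id integer primary key, body text);"
        "insert into notes (body) values ('hello from an embedded database');",
        0, 0, &errmsg);
    if (errmsg) { std::printf("%s\n", errmsg); sqlite3_free(errmsg); }

    sqlite3_close(db);
    return 0;
}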

Just saying "put it in a cross platform database file" and declaring all your problems solved isn't really a complete solution. There must be standards for how the data is organized and formatted, just as with the shapefile. I propose that a small group of GIS programmers and a user or two (or perhaps just me, if nobody is interested) provide a standard table template for each data type. As the simplest example I can think of, just to illustrate the point, how about something like this:

Tables describing shapefile, shape objects and attributes:

create table shp (type integer,
xmin real, ymin real, zmin real, mmin real,
xmax real, ymax real, zmax real, mmax real);

create table shp_atts(id integer primary key,
...user defined attributes...);

create table shp_object (id integer primary key,
atts_id integer, shp_order integer,
shp_type integer, nvertices integer,
xmin real, ymin real, zmin real, mmin real,
xmax real, ymax real, zmax real, mmax real,
vertices blob);


The need for a .shx file is eliminated by the shp_order field; just use a select with an "order by shp_order". Another problem that I can't stand is removed as well: each shape object in a shapefile has to have a corresponding entry in the .dbf attributes, which is an ugly redundancy to me. Also, you normally have to group shapes manually by checking against a key in the attributes that is not known beforehand and is different for each shapefile. With the atts_id field, you can group all shapes that belong to a particular entity. You could do a nice query like the following:

select * from shp_object as o, shp_atts as a where
a.state='OR' and a.id=o.atts_id order by shp_order;


You now have the vector outline of Oregon. Amazingly simple! Run with that idea: say you add an attribute indicating whether each vector is on the shoreline, and all of a sudden you can do something like:

select * from shp_object as o, shp_atts as a where
a.state='OR' and a.shoreline=0 and a.id=o.atts_id
order by shp_order;

Now you have the outline of Oregon without the shoreline portion.

With the right configuration of attributes you could even put multiple shapefiles into one shapedb; in fact that would be very simple and make a lot of sense. Not only that, you could include many different raster formats in the same shapedb alongside your shape objects. That would create a neat little package for distributing a set of data that belongs together anyway. No more unpacking zip and tar files and getting the file paths right. Just dump it all into one shapedb and send it out to your customers.

The raster format could be even simpler. You could just have an identifier, format information (like 'image/jpg') and an image blob. Why not throw the well-known-text projection information into a field as well, or some GML data. The problem with many image formats is the all too common restriction in decompression libraries that the data reside in a file. For cases like that, you could just dump the data to a temp file which is deleted upon completion of the operation. Using the shapedb format for a 1GB MrSID or ECW file would probably not be the best use of it anyway, for performance reasons. However, a set of relatively small tif, png, jpg or jp2 files (a few megabytes each) would work fine; you could include multiple resolution levels, a system of image tiles or whatever application specific use you can think of. One of the pre-defined table templates should be adhered to, though, if you want the shapedb to be compatible with other applications using the format.

An important consideration is how easy it would be to convert current shapefiles to a shapedb. The short answer is that it would be almost trivial. The long answer is that, depending on how complex you decide to make the table setup, it could require a little more logic. A straight read of the .shp, .dbf and .shx files, inserting each shape object and attribute list into the respective tables using the ordering of the .shx file, would do the trick. You could get slightly more tricky and collapse identical .dbf records so they were unique, then index off of those in the shp_object table, which would improve the speed of your select statements.
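
The core of such a converter is a prepared insert statement in a loop. A sketch (assuming each shape's packed vertex buffer is already in memory; the bounding-box columns are omitted for brevity):

#include <sqlite3.h>
#include <vector>

// Insert one shape object into the shapedb. Assumes vertices is
// non-empty; the real converter would also bind the bounds columns.
bool insert_shape(sqlite3* db, int atts_id, int order, int type,
                  int nvertices, const std::vector<double>& vertices)
{
    sqlite3_stmt* stmt = 0;
    const char* sql =
        "insert into shp_object (atts_id, shp_order, shp_type,"
        " nvertices, vertices) values (?,?,?,?,?);";
    if (sqlite3_prepare(db, sql, -1, &stmt, 0) != SQLITE_OK) return false;

    sqlite3_bind_int(stmt, 1, atts_id);
    sqlite3_bind_int(stmt, 2, order);
    sqlite3_bind_int(stmt, 3, type);
    sqlite3_bind_int(stmt, 4, nvertices);
    sqlite3_bind_blob(stmt, 5, &vertices[0],
                      int(vertices.size() * sizeof(double)),
                      SQLITE_TRANSIENT);

    bool ok = (sqlite3_step(stmt) == SQLITE_DONE);
    sqlite3_finalize(stmt);
    return ok;
}

Wrap the whole conversion loop in a single transaction (begin/commit around it) or the inserts will be painfully slow.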

So in summary I propose a new format, called shapedb, as an open format for the interchange of GIS data, which:
- is based on the SQLite database file format
- can be shared across platforms
- stores data from many shapefiles simultaneously
- stores multiple raster files
- stores application/vendor specific data
- allows data to be accessed and operated upon with SQL statements
- supports new data types that conform to a table structure "template"

I am currently using this as my data format for EarthBrowser v3 and am considering spinning off an open source library to support the format in other applications.

Let me know what you think!

Tuesday, March 14, 2006

NASA confirms global warming

Satellite surveys of Greenland show that melting of the ice sheet is accelerating. What is unique is that NASA is now confirming that this is being caused by global warming.

In the press release, NASA states:
"If the trends we're seeing continue and climate warming continues as predicted, the polar ice sheets could change dramatically. The Greenland ice sheet could be facing an irreversible decline by the end of the century."


Meltwater flowing into a moulin in the Greenland ice sheet.
Photo by Roger J. Braithwaite, The University of Manchester, UK.

Not to be alarmist, but we're in deep trouble in the next century.

It is beyond time for us all to take our heads out of the sand and do something positive to help mitigate the effects of this unavoidable catastrophe. At least in your own personal life, please do something to reduce your reliance on fossil fuels: purchase fluorescent lights for your home (you can find them for about $1 each), bicycle to work, start a garden, vote Democrat.

New version of CosmoSaver

I've just released CosmoSaver 1.52 for anyone who may be reading this little blog.

CosmoSaver is a 3D screen saver that tours our solar system and 29 of the major moons. All of the planet and moon positions are accurate to within 1 arcsecond. The rotations are supposed to be accurate too; at least they are for the earth. For the many moons, does it really matter?

You can try out a free demo version at cosmosaver.com

I am planning on incorporating the solar system into the next version of EarthBrowser with much higher resolution versions of earth, mars and the moon. If I am a programming god, I'll make a cool sun model with flares and everything, but that is yet to be seen.

Sunday, March 12, 2006

Is EarthBrowser a mashup?

EarthBrowser takes data from disparate sources and displays them on a 3D globe. I'm not sure what the exact definition of a "mashup" is, if there even is one. I think it is something like a service that draws together one or more sources of data and uses a mapping API to display them. Does the definition of a newly coined word like that really matter, or is the underlying concept being labeled what is really important?

EarthBrowser certainly doesn't use an external mapping API; its internal program structure provides its API. Are those KML files created independently by people defined as mashups? My guess is they are not, but why not? Does a mashup require display on a web page, i.e. a web browser as a display mechanism? That seems overly restrictive. Perhaps it requires logic being applied to the data through server scripts preparing it for final display. Google Earth with KML generally just displays static data, but it certainly can allow server-side processing through KML via a network link. Perhaps that is why so many in the geospatial community are excited about the network link in KML. It allows Google Earth to display live data, albeit in a pretty hacked up way.

I submit that the current definition of a "mashup" is a somewhat misleading goalpost for the amateur GIS community. The underlying concept of displaying location based information is much broader than what is contained in the current crop of mapping APIs. Also, server based processing of data isn't an inherent restriction; there just aren't any tools to do general geoprocessing on the client side. Yet.

Update: I forgot about ArcExplorer, which is supposed to have a pretty decent set of client side geoprocessing tools. That's what I get for trying to post while looking after a toddler and a 6 week old...

Wednesday, February 01, 2006

What's the big deal about KML format?

Looking over some different blog posts about the wonders of the KML format, I am left scratching my head as to what is so great about it. Certainly, it is nice to have a way to specify certain geospatial features, camera values, overlays, etc., along with network links to other information, in the same file in XML format. But each of these features is pretty obvious in itself.

It's always the user interface that is the make-or-break issue of any software application. KML is pretty good at specifying what you are going to see and from where you will be looking. However, there is a disconnect between how this information is used by the underlying graphics engine and how those files and their contents can be managed and manipulated through the interface. The My Places/Temporary Places link area is very clunky and limited, in my opinion.

As a format, GML is more precise and complete, and would be a better basis for a feature format. One is a very nice number; having all of the information you need in a single file is very compelling from a standpoint of simplicity. The network link allows data from various sources to be cobbled together, which is very powerful, just like HTML. Could this be improved upon?

The idea I have for EarthBrowser is a self contained data package that contains different elements such as features, camera settings, overlays, image files of various formats, shapefiles and engine scripts in separate files (or together if it makes sense), all packaged in a zip file (or my favorite, a bzip2'ed tar file). This reduces the number of network connections needed, consolidates the data so you won't have a situation where a critical part is missing due to a download failure, and eliminates the program states where only partial data is available. The engine script can use a pre-existing user interface system to decide how the package will present itself in the interface. For very large data packages, it becomes feasible to share them as .torrent files amongst other EarthBrowser users, reducing the server load for popular packages and improving availability for end users. One negative is that since you need the whole package, there is no way to display information as it arrives, like a web page does. But for 3D environments it may not make much sense, or even be possible, to display partial information the way HTML can.

WMAP high resolution cosmic background imagery

The Wilkinson Microwave Anisotropy Probe (WMAP) has some spectacular new imagery of the cosmic background radiation. The cosmic background radiation is the leftover "glow" of energy from the big bang. The lighter colors represent hotter areas in the background of the visible universe.

WMAP microwave sky image

A spherical mapping would be a nice background for EarthBrowser

Images courtesy of the NASA/WMAP Science Team

There are a number of microwave bands that could be used (K, Ka, Q, V, W). Now I just need to find the time to reproject them and, the hard part, put in a selection interface. Things like this will be so much easier to add to version 3.
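
The reprojection itself is textbook math, assuming (as the familiar oval shape suggests) the released images use the Mollweide projection. For each output latitude/longitude you solve for an auxiliary angle theta and read the corresponding source pixel:

2*theta + sin(2*theta) = pi * sin(lat)   (solve for theta; Newton's method converges in a few steps)
x = (2*sqrt(2)/pi) * lon * cos(theta)
y = sqrt(2) * sin(theta)

Then x (which spans plus or minus 2*sqrt(2)) and y (plus or minus sqrt(2)) scale linearly to pixel coordinates in the source image.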

Thanks to ccablog for the find!

Friday, January 20, 2006

More on Microsoft's geometry clipmap patent

Here is the text of the geometry clipmap patent application titled "Terrain rendering using nested regular grids."

The researchers who developed this idea (Losasso & Hoppe) published an article on their technique in NVidia's "GPU Gems 2." That seems like a pretty lame thing to do, given that they applied for the patent in early 2004. I've come across a discussion thread at gamedev where the general consensus is that it is a defensive patent and that Microsoft wouldn't assert it against others who use the technique. While that may be the case for most games, the virtual globe market is red hot, and I would assume they would use this against Google, ESRI and any other virtual globe product that uses the technique to give its offering an edge.

Thursday, January 19, 2006

Current EarthBrowser status

EarthBrowser version 2.9 is coming out soon. I am abandoning CodeWarrior as a development environment. It was a lifesaver in the '90s, but with Xcode and Visual Studio around, I understand why Motorola has them focusing on embedded developers.

I got a new Intel iMac (is that what they are calling them?) and have been preparing the Universal Binary release. Version 2.9 will have some major speed boosts due to better hardware optimizations. It will also feature a completely new database engine which is faster, portable and open source. Apple has expressed interest in distributing EarthBrowser as part of their school software bundle program, but we haven't signed any contracts yet.

It's like running through water to be developing the OpenGL-based version 3 while having to keep going back and updating the old code base. At some point I hope I can focus on the new version exclusively. Speaking of the new version, I had YAMS (yet another major setback) on that front. I had been trying to develop my own clipmap and geometry clipmap classes for the past few months. After googling for some more recent info on the geometry clipmap, I came upon Microsoft's patent application for the method. Very disappointing, and I'm certain they will be granted many of their claims. I can't take the risk of implementing patented code, so I guess I'll be moving to geomipmaps; they are simpler anyway. All those guys over at gamedev.net who are putting geometry clipmaps into their games should be made aware of it, but I am not a member there, just a lurker.
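
For the curious, part of why geomipmaps are simpler: each terrain patch independently picks a precomputed mesh level based on viewer distance (or screen-space error). A toy version of the selection, with invented thresholds:

// Pick a geomipmap level for a terrain patch: level 0 is full detail,
// each higher level halves the grid resolution. 'base' is the distance
// at which we start dropping detail (a tuning constant).
int pick_lod(float dist_to_patch, float base, int max_level)
{
    int level = 0;
    float d = base;
    while (level < max_level && dist_to_patch > d) {
        ++level;
        d *= 2.0f; // each level covers twice the distance of the last
    }
    return level;
}

The fiddly part is stitching the seams between neighboring patches at different levels, but that is still far easier than managing nested clipmap rings.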

I've been working on version 3 off and on for the past year and a half, and the foundation is very stable, scriptable and extensible. Once my second son has had a few weeks to get accustomed to this world, I hope to dig in and really bring the version 2 and 3 functionality closer together.

Wednesday, January 18, 2006

Blog Purpose

I envision the Earth Browser blog as a place to discuss issues and ideas about software development, earth science, GIS and social implications of all of the above.

I produce a software product called EarthBrowser that I have been developing since 1996, when I was in graduate school. It first went on sale in 1998 as a Macintosh shareware product called Planet Earth. It is, as far as I know, the first commercially available product to display dynamically updated earth data (just clouds at first).

EarthBrowser now has many competitors, most notably Google Earth (formerly Keyhole); ESRI has just put out ArcGIS Explorer, and Microsoft is rumored to be putting out its own 3D earth product. Each of these products has major infrastructure support and is being given away for free. Even with these challenges, I believe that EarthBrowser will flourish due to its superior user experience, relevant real-time data, excellent support and focused, dedicated and small (just me) development staff.

Welcome to the Earth Browser blog.