2012-05-10

 

Draft Post on Self-Driving Cars

I am writing this quickly, so expect it to undergo some edits and updates later.

Google's self-driving cars have been in the news recently (I'll add a link later). While the AI involved in making a car capable of driving itself among human drivers is impressive, it's just the first piece of the puzzle and, ultimately, should serve only as an emergency fallback. Here's a vision of a future of self-driving cars based on operations research, successful business models, and government-industry partnership.

Consider a platform car. This has been discussed elsewhere (need link), but the basic idea is that the wheels, gearing (if any), motors, etc. are all built into a platform with a (nearly) flat surface on which a passenger compartment can be placed. The compartment sits securely on the platform, and control signals for the resulting complete vehicle pass through an interface between the compartment and the platform. The power source can be part of either the platform or the compartment, depending on the desired economics. For now, let's assume the power source is part of the platform rather than the passenger compartment, and that the owner of the platform (who probably isn't the owner of the compartment) is responsible for keeping it fueled. The opposite arrangement yields a different economic model, one that is probably less vulnerable to monopolistic abuse but more difficult to discuss.

Now imagine a world in which everyone who would otherwise own a car actually owns a passenger compartment. They are responsible for keeping it fueled, but the platform on which it sits (say, in their garage) is rented rather than owned. More to the point, it's interchangeable.

A typical commute might involve driving to a local commuting hub (perhaps equivalent to a highway entrance), where the passenger compartment is removed from its current platform and attached to a platform driven either by a centrally coordinated server or by a peer-to-peer federated service running on all the platforms. The new vehicle merges into high-speed traffic automatically, with other cars making room for it. The traffic can move at speeds that human drivers couldn't be trusted to navigate safely, in large part because the system is highly predictable.

The passengers will have programmed their destination into their compartments, so transitions to other highways will be automated as well, and the passengers won't have to worry about missing their exit. If the next roadway is owned by the same entity (company or gov't), or if the owners have appropriate agreements and technical compatibility in place, the vehicle won't even require a platform change.

When the vehicle nears the passengers' destination(s), they will be alerted (they might have dozed off, after all). They will have to confirm that they are prepared to take control and, when the vehicle exits the highway for surface roads, it will be placed on another interchangeable platform. The passengers (or, rather, the driver among them) will then drive to their destination.

This vision has a variety of advantages, not least of which is that the transition to it can be gradual. The only government mandate required is that all new cars conform to the compartment/platform standards. Automated roadways can spring up wherever they are economically feasible, and can be funded and managed by private or public concerns as appropriate. I've glossed over fueling and payments in this discussion, not to mention how it interacts with trucking, rail, and even air travel, but that's why this is a draft. I'll get to it. In the meantime, take this as a sketch of a viable future of improved commuting and ground transportation.


2005-11-10

 

Why MS SQL Server is preferable to Oracle

Required background: RDBMS:1, ADO.NET:1

MS SQL Server (hereafter referred to as MSSQL) only runs on Windows, which means it shares the scalability limitations of Windows. It certainly isn't the best at utilizing the hardware resources at its disposal. Its bundled "Enterprise Manager" is mediocre at best (this may have improved in SQL Server 2005, but that version just came out and I haven't tried it). It doesn't even have an interactive SQL command-line tool.

Oracle runs on everything from Windows to Linux to Solaris. It scales from a single-processor workstation to a cluster of multiprocessor Sun big-iron servers. It doesn't really have bundled management tools, but there are lots of powerful third-party tools out there. It does have an interactive SQL command-line tool, and it squeezes performance out of hardware; it is astoundingly tunable, allowing a trained DBA to squeeze out even more. So why would anyone prefer MSSQL to the powerhouse that is Oracle?

Of course, I'm deliberately neglecting the most important factor: ease of use. By that I mean ease of development, rather than ease of deployment or administration. While it's valuable to be able to get the most out of hardware, and to be able to improve performance by adding hardware, it does no good if you don't have the applications to work with the data. As a database designer and developer who has worked with both Oracle and MSSQL, I can comfortably say that Oracle is a pain. Again, I'm not talking about deployment or administration; I'm talking about the design and development of DB-backed applications.

Consider this table (the relevant columns are .NET Framework, SQL Server, and Oracle). The MSSQL column types map pretty directly to the .NET types. Java or even C/C++ can be substituted for .NET, though it's unsurprising that MS designed .NET to integrate well with MSSQL. Oracle, on the other hand, doesn't even have a Boolean column type. Nor, in practice, an integer column type: INT is merely a synonym for NUMBER in Oracle, and with ODP.NET (Oracle's own ADO.NET driver) one must retrieve any numeric value as a .NET decimal.
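To make that concrete, here's a minimal sketch of how each provider hands values to .NET code. (The "orders" table, its columns, and the method names are all hypothetical; this is an illustration, not production code.)

    using System.Data.SqlClient;        // MSSQL provider
    using Oracle.DataAccess.Client;     // ODP.NET provider

    static class TypeMappingSketch
    {
        // MSSQL: column types map directly onto .NET types.
        static void ReadFromMssql(SqlConnection sqlConn)
        {
            using (var cmd = new SqlCommand("SELECT quantity, shipped FROM orders", sqlConn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                {
                    int quantity = reader.GetInt32(0);   // INT -> System.Int32
                    bool shipped = reader.GetBoolean(1); // BIT -> System.Boolean
                }
        }

        // Oracle: INT is just NUMBER underneath, so values come back as decimal,
        // and there is no Boolean column type to read at all.
        static void ReadFromOracle(OracleConnection oraConn)
        {
            using (var cmd = new OracleCommand("SELECT quantity FROM orders", oraConn))
            using (var reader = cmd.ExecuteReader())
                while (reader.Read())
                {
                    decimal raw = reader.GetDecimal(0);  // NUMBER -> System.Decimal
                    int quantity = (int)raw;             // narrowing is the developer's job
                }
        }
    }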

From a DB designer's perspective, not having a Boolean column type is crippling. It is always preferable to have tables with column names and types that are as self-documenting as possible. Having to choose either NUMBER(1) or CHAR(1) to represent a Boolean value defeats that. First off, it's easy to choose inconsistently as enhancement and maintenance proceed through the years and through many hands. This choice can and should be documented somewhere along with other conventions of the system, but such external documents are often ignored or even lost. Second, what values represent true and false? Is it one and zero for a NUMBER(1)? Non-zero and zero? Positive and negative? Null and non-null (heaven forbid, and what if you actually need a null)? How about for CHAR(1)? 'Y' and 'N'? 'T' and 'F'? How about 'y' and 'n' or 't' and 'f'? '1' and '0'? Non-null and null (same issues as with NUMBER(1))? Sure, you document it and, ideally, wrap it behind a data access layer in code (which is what I did when I wound up working with an Oracle DB), but someone will eventually mess up ("I used the COUNT for the value in that stored procedure." "But we use positive and negative to represent true and false." "D'oh!") and cause obscure bugs that are difficult to track down.
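For what it's worth, the data access layer mentioned above can be as simple as routing every Boolean read and write through one pair of helpers, so the convention lives in exactly one place. A minimal sketch, assuming the NUMBER(1), one/zero convention (the helper names are mine):

    // Hypothetical helpers centralizing the "NUMBER(1): 1 = true, 0 = false" convention.
    static class DbBool
    {
        public static bool FromDb(decimal dbValue)
        {
            return dbValue != 0m; // tolerate any non-zero as true, per the documented convention
        }

        public static decimal ToDb(bool value)
        {
            return value ? 1m : 0m; // always write exactly 1 or 0
        }
    }

Every read or write that bypasses these helpers is a bug waiting to happen, which is exactly the point: the compiler can't enforce a convention that lives only in a document.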

From a developer's perspective, there's an important distinction between monetary or arbitrary-precision values (decimal), integer values, and floating-point values (double). Having these all represented in the database as NUMBER makes code harder to understand and, therefore, harder to enhance and maintain. The same concerns raised about Booleans above apply here as well.

I don't feel the need to cover integration of transactions with code (MSSQL has it; I'm not sure whether Oracle does), quality of error messages (Oracle's are abysmal, MSSQL's are decent), or debugging integration (VS.NET has it with MSSQL; I don't know of any integration with Oracle). I also won't discuss the relative merits of PL/SQL (Oracle's procedural SQL extension for stored procedures and such) and T-SQL (MSSQL's equivalent), other than to mention the ease with which one can return datasets (i.e. sets of rows, just like an ordinary query) from a stored procedure in MSSQL and the hoops one must jump through to do the same in Oracle (a sketch of the calling code appears below). (I've done it. It ain't pretty, and it requires still more of those externally documented conventions.) The lesson from all this is that MSSQL deserves a certain popularity due to its friendliness to developers. Likewise, Oracle's hostility to developers should give one pause before choosing it as part of a system solution.
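To illustrate the dataset point: in MSSQL, a stored procedure that ends with a SELECT simply returns those rows, and the caller reads them like any query. In Oracle, the procedure must expose a SYS_REFCURSOR OUT parameter, and the caller has to know about it and bind it explicitly. A hedged sketch of the calling code (the procedure names and parameter name are hypothetical):

    // Assumed namespaces: System.Data, System.Data.SqlClient, Oracle.DataAccess.Client.
    static void CallProcs(SqlConnection sqlConn, OracleConnection oraConn)
    {
        // MSSQL: the procedure's final SELECT simply is the result set.
        using (var cmd = new SqlCommand("dbo.GetOrders", sqlConn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            using (var reader = cmd.ExecuteReader()) { /* rows arrive like any query */ }
        }

        // Oracle: no rows flow unless the caller binds the procedure's
        // SYS_REFCURSOR OUT parameter explicitly -- one more convention to document.
        using (var cmd = new OracleCommand("get_orders", oraConn))
        {
            cmd.CommandType = CommandType.StoredProcedure;
            cmd.Parameters.Add("p_rows", OracleDbType.RefCursor).Direction = ParameterDirection.Output;
            using (var reader = cmd.ExecuteReader()) { /* rows come via the bound cursor */ }
        }
    }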

Oracle's reputation for performance has kept it going for many years, but that same success has led Oracle to concentrate its efforts on performance rather than on making the database a better development platform. Java stored procedures are nice, sure, but .NET stored procedures are just as nice (supported in MSSQL 2005), and both .NET and Java have data types Oracle doesn't support. Of course, I would prefer to work with either DB2 or PostgreSQL over Oracle or MSSQL any day.


2005-11-04

 

The problem with upcoming video codecs

Required background: Video codecs:1, Algorithms:1

An important aspect of computer algorithms is resource tradeoffs. In general, algorithms are about processing input data to produce output data. The obvious resources are the time/effort it takes to process, the accuracy of the output data, and the storage space required for the input data, the running algorithm, and the output data. Of course, tradeoffs only make sense in the context of comparing algorithms.

Consider the simple example of two sorting algorithms, merge sort and quicksort. Merge sort requires twice the runtime storage that quicksort does. On the other hand, it produces slightly more accurate results in that it is "stable" (i.e. the input order of items with equal key values is preserved in the output). Merge sort also has a guaranteed O(n log n) running time, whereas quicksort's running time is data-dependent.
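To see what stability means in practice, here's a small illustration in modern .NET terms (the specifics postdate this post: Enumerable.OrderBy is documented as a stable sort, while Array.Sort is an unstable quicksort-family sort):

    using System;
    using System.Linq;

    class StabilityDemo
    {
        static void Main()
        {
            // Two entries share the key "apple"; the number records input order.
            var items = new[] { ("apple", 1), ("pear", 1), ("apple", 2) };

            // Stable: equal keys keep their input order, so apple#1 precedes apple#2.
            var stable = items.OrderBy(x => x.Item1).ToArray();

            // Unstable: no promise about the relative order of the two apples.
            var unstable = ((string, int)[])items.Clone();
            Array.Sort(unstable, (a, b) => string.CompareOrdinal(a.Item1, b.Item1));

            Console.WriteLine(string.Join(", ", stable.Select(x => x.Item1 + "#" + x.Item2)));
            // Prints: apple#1, apple#2, pear#1
        }
    }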

One chooses an algorithm based on what tradeoffs are acceptable/desirable for the particular application. If sorting stability is important in a given application, quicksort's tradeoffs are not acceptable. If sorting stability is not a requirement but the data to be sorted is nearly as large as available storage, merge sort's tradeoffs are not acceptable. This brings us to codecs in general and video codecs in particular.

Even more specifically, let's consider DVDs. The audio and video on a DVD are encoded as MPEG-2 (we'll ignore the encryption scheme for now). It is a lossy compression codec, meaning the output is guaranteed to be less accurate than the input. In fact, the encoding algorithm has a tunable tradeoff between the accuracy of the output and its size.
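The heart of that tunable tradeoff is quantization: divide values by a larger quantizer and you discard more precision, but the smaller numbers that remain compress better. A toy sketch of the idea (real MPEG-2 quantizes DCT coefficients, not raw samples; this is only an illustration):

    using System.Linq;

    static class ToyLossy
    {
        // Bigger q = smaller, more compressible output and lower accuracy.
        public static int[] Quantize(int[] samples, int q)
        {
            return samples.Select(s => s / q).ToArray(); // precision discarded here, permanently
        }

        public static int[] Dequantize(int[] coded, int q)
        {
            return coded.Select(c => c * q).ToArray();   // best-effort reconstruction
        }
    }
    // Quantize({17, 23}, 8) yields {2, 2}; Dequantize gives back {16, 16}, not {17, 23}.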

For encoding, the time/effort, input storage, and runtime storage required are largely irrelevant, since encoding is done once to prepare the DVD master and never again. Decoding, however, is constrained by the available storage on a DVD (many movies ship with supplemental discs of extras, but it is pretty important that the movie itself fit on a single disc) and by the processing power that can be built into a consumer electronics component (the more processing power required, the higher the price of the device, not to mention its power consumption and heat dissipation, so economic realities set limits on the processing power available for decoding). The MPEG-2 codec (and the DVD media format) had to be developed with tradeoffs that satisfied these constraints.

Economic constraints change, however, and they have. The DVD media format is showing its age, and there are new formats on the horizon (HD-DVD and Blu-ray, in particular). Furthermore, there is reason to believe that in the near future video will be stored on hard drives rather than optical discs and will be delivered there over an internet connection. Typical bandwidth to the American home isn't up to speedy delivery of even MPEG-2-encoded video right now, but this can be expected to improve as WiMax networks, fiber to the home, and other technologies (maybe broadband over power lines) become widely available. Meanwhile, though the cost of processing power has gone down (as it always does), there is increasing desire to view video on battery-operated devices (e.g. the video iPod, cellphones, laptops, portable all-in-one DVD players). Processing power translates directly into electrical power consumption, and portable energy density isn't improving quickly. Storage, however, even storage that can be run energy-efficiently (e.g. flash memory rather than 7200 or 5400 RPM hard disks), is getting cheaper and larger.

This points to a need for a different set of tradeoffs. The accuracy of MPEG-2-encoded video is almost certainly sufficient for portable applications, but there is less need to restrict the encoded size of the video and more need to restrict the processing requirements for decoding it. This does not seem to be the direction video codecs are taking, however.

The newer video codecs, such as MPEG-4, H.264 (which is Part 10 of MPEG-4), and Windows Media (which also builds on part of MPEG-4), are all about better output accuracy and lower output storage at the expense of higher processing requirements for decoding (as compared to MPEG-2). This makes sense in light of yesterday's economic realities, but not tomorrow's. The video iPod plays H.264 videos. Its battery life when playing videos is advertised as 2 hours, as opposed to 14 hours when just playing music. It is unsurprising that decoding video takes more effort than decoding audio, and updating the LCD screen 30 times per second while keeping its backlight on certainly accounts for some of the power consumption, but should it really take seven times the power?

This isn't the first time technology has been developed that does not serve the needs of the future or even the present, and it won't be the last. It is something to learn from, however.


2005-10-10

 

Expressing the need for background knowledge

Any technical discussion, whether about programming languages, medieval history, fine wines, politics, or whatever, requires some level of background knowledge. A general discussion of Java (the programming language), for example, would require some knowledge of object-oriented programming/design and some knowledge of C syntax. In most settings, it can be simply assumed that the readers/participants have appropriate background. This is true of textbooks and courses in particular.

On the web, however, one can wander pretty far from one's area of expertise (through a series of entirely logical clicks, of course). It would be helpful if there were a good way of telling potential readers/participants what they need to understand to get anything out of the page or site at which they have arrived. It's possible to include a few prefatory sentences, but most people's eyes will skip right over them, and the core readers/participants will probably find them annoying. There must be a better way.

Let's phrase the issue clearly: How do we inform readers on the web of the topics and depth of background knowledge that are assumed by a particular page or site? The important components are topics and depth. My humble proposal is that we choose Wikipedia topics and express the required depth of knowledge as link depth.

Consider an example article about a supernova. To understand this article, one must have a good grasp of astronomy in general, maybe a little astrophysics, an understanding of stellar evolution, and some knowledge of supernovae in general. To express this required background, we place the following line at the beginning of our article (much like a Technorati tag):

Required background: Astronomy:2

This indicates that the reader should be familiar with the material within two clicks of the Astronomy page on Wikipedia. Arguably, since the stellar evolution page links to both Astronomy and Supernova, we could use "Stellar evolution:1", but as a matter of taste we might decide that, since the article is primarily about a particular supernova, we'd prefer "Supernova:2" to get essentially the same coverage.
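For the curious, "within two clicks" is just a breadth-first search over page links. A sketch of how a tool might compute the covered set (the linksOf function is a stand-in for fetching a page's outgoing links; Wikipedia provides no API by this exact name):

    using System;
    using System.Collections.Generic;

    static class RequiredBackground
    {
        // All pages reachable within maxDepth clicks of a starting topic.
        public static HashSet<string> Covered(string start, int maxDepth,
                                              Func<string, IEnumerable<string>> linksOf)
        {
            var seen = new HashSet<string> { start };
            var frontier = new Queue<(string Page, int Depth)>();
            frontier.Enqueue((start, 0));
            while (frontier.Count > 0)
            {
                var (page, depth) = frontier.Dequeue();
                if (depth == maxDepth) continue; // don't expand past the stated depth
                foreach (var linked in linksOf(page))
                    if (seen.Add(linked))
                        frontier.Enqueue((linked, depth + 1));
            }
            return seen;
        }
    }
    // "Astronomy:2" then reads: be familiar with Covered("Astronomy", 2, linksOf).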

After all, putting articles and discussions on the web is not (in general) intended to scare off other people, but to draw them in. Regardless of which topic(s) are used in a required background tag, it is effective as long as the reader/participant can learn enough from the entries within the stated depth to get something out of reading or contribute something by participating. I will be using this convention in this blog's posts.

Of course, there is nothing special about Wikipedia. It's a well-linked knowledge base with good (not perfect) coverage of human knowledge, but any link could be used. Indeed, a sufficiently specific term or set of terms could be linked to a Google search for them just as effectively (e.g. Stellar evolution:1).

