Sensor Technology

This Week

Before I get to the topic of this blog entry, sensors, I wanted to mention that I have been privileged to play a small role in the Government 2.0 Expo (www.gov2expo.com), occurring Tuesday, September 8th, and the Government 2.0 Summit (www.gov2summit.com), occurring Wednesday and Thursday, all at the DC Convention Center.

It is wonderful that so many people around the country are interested in experimenting with 2.0 technologies to improve the way government interfaces with its external and internal stakeholders, and to rethink in a fundamental fashion how it should operate.

Democracies only work well when there is vigorous debate and participation in the public square. I encourage anyone who reads this blog, recently calculated to be in the tens of viewers, to visit these web sites and get active in such activities in the future.

Sensors

One way to look at the development of Information Technology is through the increasing capabilities of fast computers, fast networks, and fast sensors.

[Figure: Venn diagram of fast computers, fast networks, and fast sensors]

It is my contention that much of what we consider the recent revolution in social networking, and its increasing impact on organizations, is a result of the maturing of the first two: fast computers and fast networks.

In particular, having broadband capabilities at the end-points of the network was the tipping point for this revolution. The term broadband refers to the ability to transmit a ‘broad’ range of communication frequencies at once; the net effect is that the possible transmission speeds go up a great deal when broadband communication is used.

Broadband allows high-speed uploads and, more importantly, downloads from the Internet, including streaming audio or video. The end-points of the network, which started out as desktop computers and moved to laptop computers, now include personal digital assistants (PDAs) and cell phones in general.

When we have ubiquitous, fast, intelligent sensors distributed throughout a network, two very interesting developments occur.

First, the way a sensor participates in an architecture changes. Historically, sensors were passive participants in an implementation: one sent instructions to the sensor indicating what to measure or what to do, and the sensor sent back the information it collected and/or information on what it was doing.
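
To make the passive model concrete, here is a minimal sketch in Python. The class name PassiveSensor, the MEASURE and STATUS instructions, and the fake reading function are invented for illustration, not taken from any real device API; the point is simply that the host asks and the sensor answers, with all of the intelligence living outside the sensor.

```python
# A sketch of the traditional, passive model: the host polls, the sensor
# answers, and every decision is made elsewhere. All names are illustrative.

class PassiveSensor:
    """Responds only when asked; holds no logic of its own."""

    def __init__(self, read_fn):
        self._read_fn = read_fn  # whatever actually measures the world

    def handle(self, instruction):
        if instruction == "MEASURE":
            return {"status": "ok", "value": self._read_fn()}
        if instruction == "STATUS":
            return {"status": "ok", "value": "idle"}
        return {"status": "error", "value": "unknown instruction"}


# The host owns the loop and all of the intelligence.
sensor = PassiveSensor(read_fn=lambda: 21.5)  # pretend thermometer
print(sensor.handle("MEASURE"))  # {'status': 'ok', 'value': 21.5}
```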

Now, however, the sensor becomes an active agent. Using rules-based approaches or some other more robust form of artificial intelligence, the sensor not only measures, it also makes decisions and acts. While this has long been true in a limited sense, we now start to approach behavior that mimics Turing quality, making it hard to distinguish whether a person is reacting or a ‘non-person’ is acting.
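
By way of contrast, here is an equally hypothetical sketch of the rules-based, active-agent version; ActiveSensorAgent and its rules are made-up names rather than a real framework. The same measurement happens, but the sensor itself evaluates its rules, decides, and acts without waiting to be told.

```python
# A sketch of the active-agent version: the sensor measures, evaluates a
# small set of rules, and acts on its own. Rules and actions are invented
# purely for illustration.

class ActiveSensorAgent:
    def __init__(self, read_fn, rules):
        self._read_fn = read_fn
        self._rules = rules  # list of (condition, action) pairs

    def step(self):
        value = self._read_fn()          # measure
        for condition, action in self._rules:
            if condition(value):         # decide
                action(value)            # act
                break


agent = ActiveSensorAgent(
    read_fn=lambda: 78.0,  # pretend temperature reading
    rules=[
        (lambda v: v > 75, lambda v: print(f"Reading {v} above threshold: opening vent")),
        (lambda v: v < 40, lambda v: print(f"Reading {v} below threshold: closing vent")),
    ],
)
agent.step()  # Reading 78.0 above threshold: opening vent
```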

Listening to my in-laws talk to their GPS is only the tip of this particular iceberg.

Second, we are able to generate much more robust simulations: real-time simulations based on real-time data. When there is a serious car accident or a bridge collapse, we will be able to simulate the traffic implications and react to those simulations using not only historical data but data updated and enhanced by the real-time traffic measurements collected after the event occurs.
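
As a toy illustration of that idea (every name and number here is invented), a simulation of this kind might start from a historical traffic rate and continuously nudge its estimate toward whatever the live sensors report after the event:

```python
# A toy simulation that starts from historical data and is continuously
# corrected by real-time sensor readings. The blending weight and the
# numbers are invented for the example.

def simulate_traffic(historical_rate, live_readings, weight=0.5):
    """Blend a historical cars-per-minute rate with streaming observations."""
    estimate = historical_rate
    for observed in live_readings:
        # Move the estimate toward what the sensors are actually seeing.
        estimate = (1 - weight) * estimate + weight * observed
        yield estimate


historical = 40.0                       # typical cars/minute on this road
after_accident = [12.0, 8.0, 5.0, 6.0]  # what the sensors report post-event

for step, estimate in enumerate(simulate_traffic(historical, after_accident), 1):
    print(f"step {step}: projected flow {estimate:.1f} cars/minute")
```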

So What Does This All Mean?

I will talk about all of this in greater detail in future blog entries, but in this entry I wanted to note one specific implication: the blurring of the division between real and artificial environments.

With sensors pulling in real-time data and our increasing ability to use that data in real time, it will not be obvious whether we are touching the real environment, experiencing the real data first hand, or a virtual environment, experiencing the real data virtually. In fact, it becomes less clear whether the division even makes sense anymore.

If we were in an office together, I would now slap my hand against my desk or a table. I would note that my hand is not in fact ‘feeling’ the table; rather, my brain is interpreting the sensations generated when my hand hits the table, which it understands as what a hand slapped on a table feels like.

We can already replicate the visual and auditory parts of that experience in a simulation, and soon we will be able to replicate the physical part as well. Perhaps a future generation of the ‘Wii’ will allow us to physically touch that hula-hoop one can exercise with on the screen.

When a drone in Afghanistan is operated by a pilot based in the United States, what is the real and what is the virtual environment from the perspective of the pilot? How far are we from having a surgeon in St. Louis operate on a patient in New York?

When scientists have studied younger people, those Generation Y digital natives, they have found that they look at themselves, news, information, and, in fact, reality differently than people who did not grow up surrounded by the 24×7, always-on, always-available, easily editable Internet.

While these developments have changed all of our lives in many ways, the digital native’s relationship to the external environment is fundamentally different from that of someone like me, who grew up with black-and-white TVs and Peter Pan appearing once a year on television (also a topic for a later blog entry).

I would contend that the wide distribution of sensors, which will lead to the rapid development and integration of virtual environments, will cause the next generation of the next generation to look at reality even more radically differently, with potentially dramatic social consequences.