StuffDreams

2007-12-15

Green Living... The fourth R: Retain

In the infamous study comparing the life cycle costs of a Prius versus a Hummer, there is a grain of truth hidden within the misdirection: the longer you keep something, the more the cost of building it is amortised. So if you have a reasonably fuel efficient vehicle, what can you do to reduce your impact on the environment? The answer should be blindingly obvious: keep it as long as possible, to avoid the cost of building a new one.

For it to be environmentally worthwhile to buy something new, the improved efficiency of the new thing has to overcome the environmental cost of building it in the first place. Further, just because you get rid of a car does not mean it is no longer in use (unless you take it to the crusher). Looking at it macro-economically, the asking price or perceived value of gas guzzling vehicles has dropped, making them cheaper to buy, which means more people will be able to afford to run them. Replacing a five year old GMC Jimmy with a Prius just adds the Prius to the environmental load of the planet, because someone else will keep driving the Jimmy until it dies anyway.

It would be interesting to see a study on how these balance out. It might really be true that, ecologically speaking, someone who replaces their econobox every two to five years pollutes the planet more than someone who keeps the same pickup for 15 years, but it's hard to say. If you are buying a new vehicle anyway, then by all means find something fuel efficient; but if you have a five or ten year old vehicle, it is probably better, for the environment's sake, to keep it until it is unusable.

  • http://www.cnwmr.com/

2006-12-24

Cars: Zero Defect Initiative - Could be Better

This initiative, at Daimler-Chrysler, seeks to eliminate defects in software/hardware systems. It is an excellent goal, and many of the methods espoused in the article, and in a somewhat related article, Focus on: Specification (Daimler-Chrysler Hightech Report 2/2006), are about increasing communication and documentation to achieve co-ordination among apparently disparate applications. The approach, in both cases, is to follow the lead of IT projects, which are the most complex sorts of projects in existence, and therefore necessarily use the leading edge in project management. It should be noted that IT projects are widely renowned for having a high failure rate; that failure rate is a result of the very high complexity of IT projects, and is what has driven the development of project management techniques. Managing complexity is very difficult and expensive. In the short term, applying strong project management methodologies and discipline to problems with irreducible complexity, as the reports describe, is likely the most effective answer.

Looking at a longer term, however, ideally one wants to reduce complexity, so that there is less of it to manage. How does one make complexity reducible? Electronics and software technology has used the same method for decades: standardization. Standardization is mentioned as one component among many in the drive towards reliability, but its importance does not seem to be fully appreciated. It is viewed as a cost-cutter, but its role as a simplifying factor in specifications and aftermarket diagnostics is not described.

Engineers are facing ever more complex features, and the number of tests to perform is going up as a geometric function of the number of components to interconnect:
Despite simplifications in the assumptions we made, we arrived at 10^180 potential test conditions for a single vehicle model. If you wanted to examine all these as a simulation, you’d have to book several decades of computing time on a supercomputer.

. . .

... Standardization instead of ad-hoc solutions.
http://www.daimlerchrysler.com/dccom/0-5-7182-1-465545-1-0-0-0-0-0-8668-7165-0-0-0-0-0-0-1.html

Using many computers with simple standard interfaces, instead of a few large ones with very complex behaviours, reduces the complexity of interactions and simplifies testing methodologies. This is a bit analogous to the way a multi-stage crossbar switch, instead of a single-stage switch, reduces the number of crosspoint connections (see the worked numbers below):

http://en.wikipedia.org/wiki/Nonblocking_minimal_spanning_switch
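
To make the switch analogy concrete, here is the arithmetic, using the standard Clos-network formulas (my addition, not figures from the article): a single-stage N x N crossbar needs N^2 crosspoints, while a three-stage nonblocking Clos arrangement needs far fewer as N grows.

    # Crosspoint counts: one big switch versus a three-stage Clos network.
    def single_stage(n_lines: int) -> int:
        return n_lines ** 2

    def clos_three_stage(n_lines: int, n: int) -> int:
        """Strict-sense nonblocking Clos network (Clos, 1953).
        n = inputs per ingress switch; k = 2n - 1 middle switches required."""
        r = n_lines // n                    # number of ingress/egress switches
        k = 2 * n - 1
        return 2 * n_lines * k + k * r * r  # ingress + egress + middle stages

    print(single_stage(1024))               # 1,048,576 crosspoints
    print(clos_three_stage(1024, n=32))     # 193,536 crosspoints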

Individual computers control a small number of sensors: say, one computer per corner of the automobile, handling lights, tire rotational speed, tire pressure, brake hydraulic pressure, RADAR sensor data, video data, and so on. Each performs basic analysis and data reduction, then feeds standardized data to management computers, which deal with no raw sensor data, only pre-reduced data. The corner computers have multiple redundant sensors (two or three per function) so that sensor failures are easy to detect and correct. The computers can run diagnostics on the components for which they are responsible, and report summaries back to the management computer. Communication with the management computer uses TCP/IP, so checksumming improves guarantees about data integrity. The management computer can interrogate the corner computers and ask them for self-checks. Again, redundant processing means extensive self-diagnostics built in.
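
As a concrete illustration of the corner-computer idea, here is a minimal sketch; the message format, host name, channel names, and tolerance are my assumptions, not anything specified in the article:

    # A "corner computer" reduces redundant sensor readings to one trusted
    # value, flags disagreeing sensors, and reports a summary over TCP/IP.
    import json
    import socket
    import statistics

    def reduce_redundant(readings, tolerance=0.05):
        """Vote among 2-3 redundant sensors: median value, plus outliers."""
        median = statistics.median(readings)
        faults = [i for i, r in enumerate(readings)
                  if median and abs(r - median) / abs(median) > tolerance]
        return median, faults

    def report(host, port, corner, channel, readings):
        value, faults = reduce_redundant(readings)
        summary = {"corner": corner, "channel": channel,
                   "value": value, "suspect_sensors": faults}
        with socket.create_connection((host, port)) as s:
            s.sendall(json.dumps(summary).encode())  # TCP provides checksumming

    # Three redundant tire-pressure sensors, one of them disagreeing:
    # report("mgmt.car.local", 9000, "front-left", "tire_pressure_kpa",
    #        [221.0, 220.5, 180.2])  -> reports 220.5, flags sensor 2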

Total wiring is reduced because, rather than wiring each subsystem individually, power becomes a bus that distributes to each corner, and distribution within each corner is handled by the computers there. Reducing the total wiring reduces the number of connections to test and troubleshoot.

While such an architecture is a large change at first, once in place, very little ongoing testing is required to maintain it, and the system itself is very modular and independent of changes to individual components.

Standardization is mentioned as one element to assist in reducing defects, but instead of applying the lessons of standardization and upward compatibility to components, to cope with the rapid product cycle in the electronics industry, the choice is to force the electronics groups to slow down:
In Wolfsried’s estimation, however, the benefits of such rapid development - smaller components or more power for the same size - are totally outweighed by the risks involved for dependability. Ultimately, each time the model changes, the part must be tested and approved with regard to its ability to withstand vibration and fluctuations in temperature, for example - and that’s an elaborate and costly process. “So in the future we’re going to work with only those semiconductor makers who will guarantee not only the necessary standards of reliability for the parts but also the availability of certain elements throughout the lifecycle of a vehicle.”
The problem with this approach is that it solves the wrong problem. The five year development cycle and intensive testing requirement are the problem to address, rather than the short life cycle of the electronic components. In the computer industry, short product life cycles are normal, and asking suppliers to commit to longer product life cycles just forces up costs. The automotive industry has been using custom equipment, which was the norm in embedded systems because the field was so immature. In the computing industry, there have basically been only a couple of multi-vendor "standard" hardware communications interfaces at any given time over the past 20 years: formerly RS-232/RS-422 serial, and now Ethernet with RJ-45 connectors, plus USB.

That is a level of hardware standardization. Clearly, physical connectors from commodity computing cannot be used without modification in the harsh environment of automobiles, but hardware durability is really the only thing that is special about this environment. There is a great deal of common ground between MIL-SPEC-type robustness requirements and the automotive environment. Testing by automotive groups could concentrate on physical reliability (loose connections, corrosion, dirt, etc.).

If connectors are standardized and computer architecture coalesces on a PC-style arrangement (Intel/AMD based systems running Linux), then, as in computer applications, one takes modern hardware, installs modern systems software, and can then run the old software on this improved, more robust base. This is the principle behind upward compatibility.

The communications standards and other software infrastructure, such as operating systems, can be taken directly from the commodity industries. Applying many more, simpler computers will simplify testing. A typical automobile would have many gumstix-style computers, communicating among each other using TCP/IP over Ethernet, and running a standard operating system such as Linux.

With such an environment, replacing a processor from 2005 with a processor from 2010 would mean running the software appropriate for the 2005 model on the new platform. The testing of the component covers only hardware durability, since the connectors and software will be identical. Vendors would not have to stock the five year old computer, since components from this year's model would be able to replace the original equipment. This can only happen if the automotive industry uses operating systems and drivers which are standardized and abstracted away from the automotive application itself. (The software to operate the vehicle's sensors and components, and the software to operate the computer which runs the application, need to be separable, so that one can change the computer without modifying the vehicle.)
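
A sketch of that separation; the interface and class names are hypothetical, purely to show the principle that the vehicle application never touches the hardware directly:

    # The application talks only to an abstract platform layer, so swapping
    # the 2005 computer for a 2010 one replaces the platform classes, not
    # the vehicle software.
    from abc import ABC, abstractmethod

    class SensorBus(ABC):
        """All the vehicle application ever sees of the hardware."""
        @abstractmethod
        def read(self, channel: str) -> float: ...
        @abstractmethod
        def actuate(self, channel: str, value: float) -> None: ...

    class Bus2005(SensorBus):
        def read(self, channel): ...        # drivers for the 2005 board
        def actuate(self, channel, value): ...

    class Bus2010(SensorBus):
        def read(self, channel): ...        # new board, identical interface
        def actuate(self, channel, value): ...

    def brake_assist(bus: SensorBus):
        """Vehicle application: unchanged when the computer is replaced."""
        if bus.read("hydraulic_pressure_kpa") < 500.0:
            bus.actuate("brake_boost", 1.0)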

So the first defect is to under-appreciate the benefits of standardization, and to assume one must force electronics onto a slower rate of advancement, rather than capitalizing on upward compatibility to add features at ever reduced cost, in a judo-like taking of the energy of a trend and channeling it to one's own advantage.

Another weakness is the near total emphasis on pre-market testing. Aftermarket diagnostics and followup are also important. Right now, computer information is typically read by the mechanic, and then reset and erased before the car is returned to the client. Manufacturers should consider this information a goldmine of reliability data. Getting information about how components actually fail, coupled with GPS and weather data, would improve data gathering on how systems are used. Have on-board systems accumulate logs, and have maintenance personnel able to retrieve them, like odometers (say, a write-once medium for diagnostics). Even better would be to include the work done on the vehicle (parts replacements, etc.) to provide a complete log to subsequent owners as well as the constructor. Such information could be used to improve diagnostics of common problems in older cars, and thereby improve the ownership experience for owners of older vehicles.
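
One way the odometer-style, write-once log could work; the record format and hash chaining here are my illustrative assumptions, not a scheme from any manufacturer:

    # Entries are chained by hash, so later tampering or deletion is
    # detectable, approximating a write-once medium in software.
    import hashlib
    import json
    import time

    def append_entry(log, event, detail):
        prev = log[-1]["digest"] if log else "0" * 64
        entry = {"ts": time.time(), "event": event, "detail": detail, "prev": prev}
        entry["digest"] = hashlib.sha256(
            (prev + event + json.dumps(detail, sort_keys=True)).encode()
        ).hexdigest()
        log.append(entry)

    def verify(log):
        prev = "0" * 64
        for e in log:
            digest = hashlib.sha256(
                (prev + e["event"] + json.dumps(e["detail"], sort_keys=True)).encode()
            ).hexdigest()
            if e["prev"] != prev or e["digest"] != digest:
                return False
            prev = digest
        return True

    log = []
    append_entry(log, "fault", {"code": "P0301", "odometer_km": 182400})
    append_entry(log, "repair", {"part": "ignition cables", "odometer_km": 182410})
    assert verify(log)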

Maintaining vehicles past a million kilometers is a point of pride of the Mercedes brand. Making diagnostics and repairs cheaper and faster is something that can only be achieved by studying the failures of these vehicles in the field. There are many good reasons to do this: running vehicles for more kilometers improves the manufacturer's reputation, and feeds back into the quality process in future years, potentially reducing some of the need for long term pre-market testing. Such work also reduces the replacement rate of automobiles, which is good for the planet.

So, to increase quality over the longer term, one should look at reducing complexity through a focus on components which are standard for their industry, with standard interfaces, so that individual components can evolve compatibly over time. And one should learn about the failure modes of aging components by maintaining birth-to-death logs of data on vehicles.

2006-12-21

The Obsolescence of Work

Here is a delightfully wrongheaded article,
Scientific American: Not So Revolutionary,
which posits that recent advances are no third industrial revolution. It is utterly wrong because it fails to grasp that where we are now is no indication of where we will be in 50 years. We are still very much at the beginning of the impacts of computers and information technology, much as the steam engines of the Greeks were simply toys for entertainment. Today's technology is hopelessly primitive.

The end point of information technology revolution is nothing less than the elimination of human labour. Robots are the logical marriage of the industrial revolution (age of mechanical machines) with the information revolution (age of mechanical brains.) Once we are able to put brains and brawn together in a package that can function in a real complex environment, such as on our highways, dealing effectively with people in human languages, or walking in hallways of a typical office building, applications for robots will be everywhere. It is not some other revolution that is coming, it is just the natural progression of increases in computing power and robotics itself that will lead to far more functional robots. As they get more functional and can be applied in a wider variety of situations, the market will explode, and spur further developments.
How can we see this dynamic today?

So far, the rule for robotics was always to go for jobs with the 3D's: jobs which were too Dirty, Dull, and/or Dangerous for humans to do. Undersea inspection of drilling rigs, welding, and repairs are dangerous jobs for humans, and usually rather dull; the divers themselves are often quite happy to trade their suits for joysticks and warmth. Similarly, cleaning the inside of nuclear reactors, where the radiation is too high for humans to venture, is a long time application for robots. Painting or welding components of automobiles is now done cheaper by robots. Poking at suspected bombs in Iraq, or at dust-motes in our houses (http://www.irobot.com), shows how things have progressed. A robotic vacuum cleaner is now somewhat practical.

What's next? The American Defense Advanced Research Projects Agency has asked for a large number of the military's vehicles to be self-driving by 2015 ( http://www.grandchallenge.org/ ). What is driving that target? Truck drivers being ambushed in towns in Iraq. If there are no truck drivers, there is no political cost to the loss of a convoy. Driving trucks in Iraq is classic 3D work.
Today, we have "Predators" ( http://www.airforce-technology.com/projects/predator/ ) which patrol the sky, looking for things which are out of place. If one finds something, a human is alerted, and perhaps more humans are sent in, on the ground, to get better information. Those people sent in, today, would be infantry, perhaps a patrol driving in jeeps or humvees through a city to look at a certain house, a certain car, a certain man. Given an unfriendly population, this mission is going to be dangerous, the drive is going to be stressful and dull, and if conditions are like Iraq today, the troops will arrive dirty.

It would be very attractive for the army to be able to deploy an infantry version of the Predator: walking, watching, talking robots to search for insurgents in villages. They could walk into people's houses, ask questions, and so on. If the natives are restless and destroy a robot, it will not leave a grieving family or make the evening news, and a replacement will come off the line much more quickly than a human infantryman can be trained. The people operating this infantry patrol could be in Des Moines, so there would be no need to ship in thousands of tonnes of food for tens of thousands of personnel in the battle area. Soldiers would complete their shift, and stop by Safeway on the way home to their families. A single human would likely be able to operate an entire patrol; the robots themselves would have software to perform analysis, and let the human know when they find something interesting.

Soldiers would thus become more like policemen, where hiring criteria would stress the ability to read people, think critically, and assemble realistic theories of what is actually going on. Instead of weapons technicians, the need would be for detectives. Friendly fire ceases to be much of a problem if the battlefield is automated. But full scale battles are unlikely in the future; we will see more "engagements" like Yugoslavia, Rwanda, Ethiopia, Somalia, Afghanistan, Iraq, and Lebanon, where the key is to patrol at street level, find the "bad" people, and deal with them and only them. A rich country could field a million robots where it could only field 10,000 heavily trained volunteer soldiers. Hiring for shift work in Des Moines will likely be a lot easier than hiring for today's infantry. The advantages are so overwhelming that, as soon as such a thing becomes even vaguely practical, it will happen. Once it happens, it will become more common.

Forgetting Dangerous, let's just go to "Dull" and "Dirty". People everywhere are getting older. Would elderly people want live-in orderlies who help them bathe, cook for them, remind them and guide them into taking their medications, see when they are in distress, and will never take advantage of them in any way? Basically, such robotic help would permit people to be taken care of in their own homes as an alternative to "assisted living" communities. Those communities are very expensive, and so provide the upper bound on the cost of robotic help. I suspect that the market will be absolutely huge across the entire developed world.

Take driving a cab. Please. It will no longer exist as a job. With vehicles like Stanley (see grandchallenge.org), the taxi will be automated. With no human to support, the cost of taxi service will plummet. With cars able to communicate with each other, they could hook up in trains on the highways to improve density and reduce fuel consumption, and you could work or play on the way to wherever you are trying to go. The community automobile will become far more practical: instead of needing reserved parking spots, the car will simply pick you up at the appointed time, and park itself somewhere or operate as a taxi until the next reservation. Private cars will drive their occupants to work. If you know your car will pick you up after work, would you mind if it acted as a taxi during the day, reducing your cost of owning the vehicle, saving you the cost of parking, and reducing the total number of vehicles on the road?

In this month's Scientific American cover story, Bill Gates posits A Robot in Every Home. The article itself is a well crafted advertisement for products which attempt to establish the same sort of rental income for robots that has been so successful in personal computers. The examples and illustrations are about what is available now, and what will become available in the next few years. The article is a passable tour of the current ferment in robotics as a field. We are just beginning to have robots which can see, touch, understand their surroundings, understand human faces, and on, and on.

It is hard to say exactly when the technology that will make robotics explode into practical life will arrive, but it is virtually certain that all of the above applications will be addressed in some manner. The current trend is clear: mechanical machines are slowly integrating with electronic brains to produce artifacts that will be able to perform virtually any sort of physical work which can be performed by a human. At the very worst it will take a century. My bet is that self-driving cars will be common within twenty years, but it may take another generation before their full potential is explored.

The end result of the industrial revolution was for the number of workers in agriculture to drop from 90% of the population at the outset to something under 5% today. The end result of the information revolution will be to slowly eliminate all forms of labour from our society. I cannot say whether this is desirable or not, but it is, on clear technological and obvious economic grounds, inevitable. To conclude that this revolution is less disruptive than previous ones is to misunderstand the scope of the changes underway today to a tragicomical degree.

2005-11-09

Reduce automobile fuel consumption by 25%.

The government should mandate that all new motor vehicles sold after a certain date have continuously visible fuel consumption gauges, in the same way that they have speedometers and odometers. That is, indicators of the following would always be visible on the dash:
  • instantaneous fuel consumption (right now.)
  • fuel consumption tied to the trip counter (what's my mileage since I hit reset.)
  • overall fuel consumption tied to the odometer (life of the car.)
  • tire pressure warnings about low or high pressure.
It is estimated that improper tire inflation pressure makes you use between 3 and 5% more fuel. Jacques Duval, a car expert in Québec, recently performed a media demonstration of instantaneous fuel consumption as a way of making people sensitive to how their driving habits can cost an additional 20% to 30% in fuel consumption. If drivers always see their mileage in real time, it will train them to adjust their habits. Just saying it is not enough; folks need the continuous reinforcement that comes from data visible in real time.

All modern automobiles use fuel injection under computer control. It is trivial to extract fuel consumption from any such system, and correlating it with odometer information to derive mileage is equally trivial; the hard part is the ergonomics of displaying the data on the dash. This was obvious from the reportage: the automobile used in the test, a modest Chevrolet, already had an onboard computer with such display options; the problem was the amount of button-pushing needed to bring the appropriate fuel consumption figure up. A constant display would be far better.
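
For the curious, the arithmetic the onboard computer must do really is trivial; a sketch, with assumed signal names:

    # The proposed gauges, from data every fuel-injected car already has:
    # fuel flow (from injector pulse width) and distance (from the odometer).
    def instantaneous_l_per_100km(fuel_flow_l_per_h: float, speed_km_h: float) -> float:
        """Consumption right now; unbounded when idling at a standstill."""
        if speed_km_h < 1.0:
            return float("inf")
        return 100.0 * fuel_flow_l_per_h / speed_km_h

    def average_l_per_100km(fuel_used_l: float, distance_km: float) -> float:
        """Trip or life-of-car figure: total fuel over total distance."""
        return 100.0 * fuel_used_l / distance_km if distance_km else 0.0

    # 6.0 L/h at 100 km/h -> 6.0 L/100 km; 540 L over 9000 km -> 6.0 L/100 km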

In any event, the cost to automobile manufacturers of implementing this feature will likely be negligible. Using a three year time span would allow the manufacturers to incorporate the new requirement into their next review of their product lines, further minimizing implementation cost.

Automated tire pressure monitoring systems are already present on some luxury models. Monitoring of tire pressure is a safety concern as well as an environmental one: cars with incorrect pressure can show poor manoeuvrability and traction, on top of the fuel consumption penalty. Verification of tire pressure is a relatively time consuming and oft neglected chore. For those with poor mobility, such as the elderly, or those who will not normally perform such tasks (such as my wife, who does not want to touch the dirty wheels or crouch beside the car in the winter slush), automated monitoring is a boon. Admittedly, this requirement might add cost to automobiles; one hopes that as it becomes a mass market item, the additional cost would be minimal.

From Achilles Michaud's report, a Ford Focus produces 3.7 tonnes per year of CO2, while a Ford Explorer produces 7 tonnes. If we take an average vehicle as producing 5 tonnes per year, then a twenty percent savings from these two measures would meet the 'one tonne challenge' all on its own. The goal of this suggestion is to ensure that drivers have real data on which to make continuous decisions, and to promote real progress towards meeting Kyoto targets in a sustainable fashion.


The followup to Kyoto is coming to Montreal: http://unfccc.int/meetings/cop_11/items/3394.php

The Jacques Duval & Achilles Michaud media report:
http://www.radio-canada.ca/actualite/v2/tj22h/index.shtml

The one tonne challenge:
http://www.climatechange.gc.ca/onetonne/english/

2005-11-05

Bandwidth is a utility.

Telephone used to be analog and application specific. Television used to be analog and application specific.
By application specific, I mean that your phone never used to serve web pages, and you never expected to order pizza over your cable television. Both technologies are now fully digital. Beyond digital, both technologies are now TCP/IP based (they use packets, and the protocols that underlie the internet). Once things are digital, they are no longer application specific. It makes no sense to have a network for a single application (phone is a single application, as is cable television).

Technology is improving at a rapid rate, and in the next few years the phone companies will be rolling out fibre to the home. They will do that because cable companies have a much higher bandwidth medium (coaxial cable) to work with, and can offer to replace phone service more cheaply on their cables. In contrast, traditional copper phone cabling cannot carry nearly as much signal as coax, so the phone companies have to roll out fibre or risk being driven into oblivion by everyone cancelling their phone service and using only cable (because phone service will be much cheaper there).

But the fact remains: with Internet technologies, you do not need multiple networks connected to your house. Given a choice, why would you have multiple networks running to your house? Not only is there no reason, it is quite expensive. Over the next twenty or thirty years, places are going to lose networks. In some areas, enough people will switch away from the phone company (or the cable company, once phone companies start offering cable TV over fibre) that it will become very expensive for the losing company to provide service there; gradually the overlap of the cable and phone networks will decrease, and eventually we will settle into comfortable oligopolies, where single companies have the only network that covers large areas.

That future is almost certain to occur in the US, where internet service has been completely deregulated. Not only will you have only a single network provider, but that network provider, without competition in the local area, will be quite expensive. The natural number of physical network providers in any given area is 1. Internet bandwidth is a lot like electricity that way: sure, you can have multiple sets of power lines, but that is going to be hellishly expensive unless your clients are huddled together, hopefully near the point of generation. So we should look at the concept of cities providing bandwidth, much like most cities provide water, or local authorities provide power (Hydro-Québec, the Tennessee Valley Authority, etc.). That is probably the route that makes the most sense over the long term, the alternative being a Bell-style regulated utility.

OK, so basic economics points to losing expensive extra networks. A basic thing that an oligopoly of private networks will want to do is packet preferencing and packet filtering. Today, I run a mail server out of my house. Most people cannot do that because their ISP agreements prevent them from running 'servers.' Anyone using internet phone service is very likely running a server, and very likely violating their ISP's terms of service. What the ISPs want to do is sell you their own phone service, their own email service, their own web-hosting service. What people do not realize is that you can do all of that in your home, for nothing, as long as networks do not do packet preferencing.

Today it takes some geek knowledge, but there is no cost involved. The major ISPs have already killed competitive email solutions by blocking port 25 (the mail traffic port), and are fighting against providers like Skype to try to keep their customers from being able to get voice communications from someone else. This will only continue if the networks are unregulated and permitted to keep filtering and prioritising according to their corporate interests. Dropping voice traffic at the gateway is in the corporate interest of your cable company: it reduces load on their internet link, and enables better service for the cable company's own clients. But it is deeply wrong. It is as if Westinghouse were your power company and permitted only Westinghouse appliances to be connected to the power; GE would be out, Frigidaire too.

Clearly, what you want is a vendor-neutral internet, where you can buy services from whomever you want, and even build your own if you are the DIY type. No private company will want to give you that freedom unless there is coercion of some kind. Government regulation could do it, but competition from a city-run or area-run non-profit with the public's interest at heart could probably force the for-profit corporations to be civilized.

The real question is how the most people can get provisioned with vendor-neutral bandwidth at the lowest cost to the consumer and the economy. I very much doubt that will happen in an unregulated economy, because the economics push towards a natural monopoly, and a for-profit monopoly does not drive efficiency.

Ordinary people should want standardized WLANs

Wireless communications could be a lot better if they were standard. Imagine if a large market (practically: the US, Europe, Japan, or China) passed a law saying 'all consumer electronic devices must communicate with the IEEE 802.11 WLAN protocol by the year 2015.'

Sounds like gobbledygook, right? OK, some de-geeking is needed...
What's a WLAN? It stands for wireless LAN, and the 802.11 standard mentioned is why you can use a wireless base station from one company with a computer from any other company. A protocol is the language that is used to trade information between two computers. Today, a remote control talks to a TV using a signalling system made by the manufacturer (or some sub-contractor) to communicate with their own controller (or one made by a sub-contractor). No other device can trade information with your TV.

A cordless phone talks to its base station using another private signalling system; wireless weather stations and many other thingums use their own private systems. There are government bureaucracies in most countries (the FCC in the US, the CRTC in Canada) which say to manufacturers: you can only transmit at such and such a frequency, at such and such a power level. The frequencies are treated as a kind of real estate. There was a very good reason for this sort of management through most of the twentieth century: people were making radios that talked over each other and interfered with each other. RADAR works on the same principle as broadcasting; if there were no management, you would have snow on your TV screen every time the local airport's RADAR swept in your direction. If you think these sorts of problems are imaginary, read this brief account of keyless car locks going nuts near military bases. These sorts of problems result from folks thinking that low power, short range communications should not interfere with anything. What nobody counted on was that RADARs and jamming equipment are, by their very nature, very powerful transmitters, and can overpower low power transmitters that are very far away.
This sort of problem happens because the devices in question are very simple.
When computers communicate, there is a famous (OK... famous among geeks) seven layer model ( http://www.freesoft.org/CIE/Topics/15.htm ). There is a physical link layer, which could be over radio waves, wires, glass fibre, or whatever; standard electronics for the medium takes care of actually sending and receiving signals. Another basic concept that shows up at this level is 'packets'. Packets are messages limited in size by the medium being used to send them. They are usually quite small: they have a clear radio signature at the beginning, a clear radio signature at the end, and the middle follows some rules about how information is stored in it. Say the first part of the packet states how long it is, and at the end there is often a checksum, which is a simple check to see if the packet got clobbered in transit. If any of the rules (start tag, end tag, a size that makes sense relative to the start and end, a checksum that matches what was received) are broken, then the physical link layer will normally discard the packet as invalid.
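
A toy version of those rules; the frame layout is illustrative, not any real 802.11 format:

    # Start tag, length, payload, checksum, end tag; anything that breaks a
    # rule (such as random radar noise) is discarded at the link layer.
    import zlib

    START, END = b"\x7e", b"\x7f"

    def make_packet(payload: bytes) -> bytes:
        crc = zlib.crc32(payload).to_bytes(4, "big")
        return START + len(payload).to_bytes(2, "big") + payload + crc + END

    def parse_packet(frame: bytes):
        """Return the payload, or None if any framing rule is violated."""
        if len(frame) < 8 or frame[:1] != START or frame[-1:] != END:
            return None                    # bad start/end tag, or too short
        length = int.from_bytes(frame[1:3], "big")
        if len(frame) != 1 + 2 + length + 4 + 1:
            return None                    # stated size doesn't add up
        payload = frame[3:3 + length]
        crc = frame[3 + length:3 + length + 4]
        if zlib.crc32(payload).to_bytes(4, "big") != crc:
            return None                    # checksum says: clobbered in transit
        return payload

    assert parse_packet(make_packet(b"unlock")) == b"unlock"
    assert parse_packet(b"\x00 garbage from a radar sweep") is None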

Sorry... what? It means that if someone points a RADAR at your car, then with early keyless entry systems, the car just reacts to a fairly simple signal. It will 'see' almost all the traffic, and eventually, just through dumb luck, random noise will be 'decoded' by the receiver, and the trunk will open because of the RADAR. Now these things are slowly getting smarter, but they are really just re-inventing a wheel that already exists: wireless LANs. A wireless LAN, on the other hand, will try to put any signal it receives through the packetization engine, and throw out almost all the random stuff as garbage.

The stuff that makes it through the packet engine will then have to go through the security mechanisms of a wireless LAN. There are lots of mechanisms, but basically the idea is that both the remote control and the reception unit in the car 'know' a 'password'. They use the password to make the message unreadable; someone who knows the password can make it readable again. So once you get a packet, wireless LAN hardware will try to make the message readable again, based on the shared secret. If that doesn't work, again, the packet is thrown out. The resistance to interference comes not from having a really good radio, but from how the signal is structured, making it nearly impossible for random interference to be understood as real commands from a remote that a device should listen to. The shared secret makes it reasonably hard for your neighbour to change your TV's station.
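
A toy version of the shared-secret check; note that this sketch authenticates rather than encrypts (which is the property that matters for the trunk example), and it is not any actual 802.11 cipher suite:

    # Both ends hold the key; a packet whose tag doesn't verify is thrown
    # out, exactly like a malformed one.
    import hashlib
    import hmac

    KEY = b"paired-at-the-factory"         # the shared 'password'

    def seal(command: bytes) -> bytes:
        return command + hmac.new(KEY, command, hashlib.sha256).digest()

    def unseal(packet: bytes):
        """Return the command, or None if the sender lacked the secret."""
        command, tag = packet[:-32], packet[-32:]
        expected = hmac.new(KEY, command, hashlib.sha256).digest()
        return command if hmac.compare_digest(tag, expected) else None

    assert unseal(seal(b"open-trunk")) == b"open-trunk"
    assert unseal(b"random radar noise" + b"\x00" * 32) is None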

OK, but what about interference? Early cordless phones would buzz when you used them near appliances. They used a single frequency, like a radio or a television, and if an appliance made noise on that frequency, you would get buzz. After a while, higher frequencies like 900 MHz came along, and phones moved to 'digital signals' (which means they make the audio into packets... but each maker does it their own way), which made things a little better, but there were still problems with interference and crosstalk. So then 'Digital Spread Spectrum' came out. What's that? Well, it means that instead of using one frequency, the cordless phone and base station listen for other radios on a bunch of frequencies, and avoid interference by using (aka "hopping" to) the ones with the least noise.

So when a military RADAR scans across your car, if the car is using wireless LAN technology, the car starts discarding 99% of the packets coming in on the frequency it was using, assumes there are others using it, and the remote and the car switch to a less crowded frequency instead.
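
The hopping logic, sketched; this illustrates the idea rather than any standard's actual algorithm:

    # Hop when most frames on the current channel fail to parse, and move
    # to the quietest channel seen in a recent scan.
    def should_hop(frames_seen: int, frames_valid: int, threshold: float = 0.5) -> bool:
        return frames_seen > 0 and frames_valid / frames_seen < threshold

    def pick_channel(noise_by_channel: dict) -> int:
        return min(noise_by_channel, key=noise_by_channel.get)

    # A radar sweep floods the current channel with garbage:
    scan = {1: 0.9, 6: 0.99, 11: 0.1}       # observed noise per channel
    if should_hop(frames_seen=200, frames_valid=2):
        channel = pick_channel(scan)        # -> 11, the quiet one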

A radio that has things like frequency hopping, packets, and security is what you need when there are a lot of devices sharing the same radio space, if there are to be no gaps in conversations, and no trunks of cars opening at random intervals. Common devices can share less radio space, and we can reclaim frequencies for other uses. Another benefit of this technology is that WLAN radios adjust their power levels, sending at only the minimum power required to communicate. Those who worry, rightly or wrongly, about long term exposure to radio waves can take comfort that smart radios will use less power, and that the total amount of radio transmission will be reduced by all devices being able to share a single base station.

Today, the hardware for a WLAN interface costs on the order of $20. This is a lot for some forms of consumer electronics (think of a DVD remote control), but standardization will drive down costs, since the market will be vast, no one will have to develop company-specific radio hardware, and all devices will be able to listen and talk to each other.

So it won't cost much, and it will free radio frequencies for other purposes, but what is the win for the consumer? Well, using the same system as computers means you have a gateway to computers. Anything you can send over this short range radio can be sent to a common base station, and then sent anywhere over the internet.

OK... instead of a cordless phone with a base station for that brand of phone, all the cordless phones (like this one: http://www.qiiq.com/products/productsQWiFiFONEXUV.htm) will work with any wireless base station. They use Voice over Internet Protocols, and soft PBXs (like this one: http://www.asterisk.org) implemented on computers, to give any home a complete industrial-strength phone system. No such thing as a busy tone: someone phones your home to talk to your teenager while your spouse is conversing with his or her mother, and you can still receive calls at the same time. There would be a few tests to pass before the phone actually rings in the house: phone numbers of people you know would be let directly through, while others would have to answer some questions first. Gone are the days of heat pump salesmen interrupting your dinner. Cell phones are basically history: people will have bandwidth available everywhere, and calls will be free.
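
The screening rule is simple enough to sketch; the hooks and numbers here are hypothetical, not the Asterisk API:

    # Known callers ring straight through; unknown callers must pass a
    # challenge before the house phone ever rings.
    KNOWN = {"+15145550123", "+15145550199"}   # family and friends

    def route_call(caller_id: str, passed_challenge: bool) -> str:
        if caller_id in KNOWN:
            return "ring-house"
        # A soft PBX would play a prompt and collect an answer here;
        # only then does dinner get interrupted.
        return "ring-house" if passed_challenge else "play-challenge"

    assert route_call("+15145550123", passed_challenge=False) == "ring-house"
    assert route_call("+15145550000", passed_challenge=False) == "play-challenge"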

If the car and the house speak to each other, you can tell your home entertainment system to send the new Disney movie to the car so that the kids can watch it on the way to grandma's, without having to carry anything. You can use the TV remote to start the car so it is warm when you get there, and to find out whether you need to get gas or the tire pressure is low.

It is hard to come up with good examples of how this will change everything, but it will, in ways that we cannot foresee right now. It would be a multi-billion dollar win for society, but it needs a network effect to get started.

This is all easy stuff to do at a technical level. It isn't hard; it just needs people in different industries to talk to each other. If you take existing wireless LAN technologies, layer web servers on top, and use XML for communications, then all of these things are just a matter of agreeing on details. Proposing a law or a regulatory requirement might give just the push we need.

2005-11-04

Cars Should Tell You What is Wrong

I'm kind of wondering about car maintenance because I drive a nine year old car with 211,000 km on the odometer. It runs fairly well and does not have too much rust (this is Quebec; they salt the roads, and cars rust rather quickly), but it is kind of disheartening: the Check Engine light comes on whenever it is damp, and goes out after a couple of days. When the car was young, I used to take it to the dealer, and they would kind of shrug. The computer would be telling them to replace some $500 part, and they knew it was kind of bogus. After a few years (it happened once every six months or so at the time), they figured out it was the ignition cables, which had issues when it was wet.

Once, the alternator died with no warning. Another time, a radiator hose broke. I didn't much like being stuck by the highway with the wife, the children, and the domestic animals. Back in the seventies and eighties, this was kind of normal stuff, and you just dealt with it, as I am doing now. But cars have become a sort of utility, especially in single car families.

The automotive industry today tests for durability, alongside other concerns like reducing weight to improve fuel consumption, and ensuring that things crumple optimally in an accident. The durability tests for components, in my experience, mean that at around 180,000 km, stuff starts to break, regardless of make or model. This will only get worse, as car components are increasingly cost driven and the same contractors sell parts to multiple car makers. Brand name is less and less an indicator of component quality.

This tells me that 180,000 km is about the limit of durability one can expect, keeping cost and other factors in mind. For the car maker, this is past the reasonable point of testing, and they probably want you to buy a new car. It used to be that a reasonable person with a screwdriver, a feeler gauge, and a few wrenches could maintain a car literally forever. Those days are long gone; cars have become far more complicated. For owners, there is the obvious cost reason to keep the car past that point, but there is also the planet to consider. Sure, cars can be recycled, but it is probably much better to just maintain the car you have (assuming it isn't a gas guzzler). If you want to keep a car beyond that point, the most important question is: how easy is this car to diagnose and repair? That tells you how much time and money you will be spending on maintenance, which is the only major cost once the car is paid off. Keeping cars as long as they will last has, paradoxically, become less practical over the past few decades because QA testing and engineering have improved: where before a component would be over-engineered and last forever, now it is made 'just right' (not too heavy, not too expensive, not too... durable.)

This should not be that hard. If you design the vehicle with enough sensors, and enough software, it would very likely do a great job of diagnosing itself. For an owner who plans to keep the car for a very long time, really good sensors and diagnostics are a good economic investment, because then a repair for an obscure electrical problem will take one hour of labour instead of six. Maintaining an older car could be relatively easy.

I've seen it time and again: people do not get electrical problems fixed simply because the labour will be too expensive. This should be much simpler. The problem is how to get the motivations in place so that there is an economic incentive for car makers to make older cars easier to maintain.

2005-10-21

Cars: bring costs & complexity down, features up.

So far, people have been putting computers into the entertainment systems of automobiles, and into the engine control systems. Today, those are black boxes which require specialized licensing and software to perform diagnostics. Cars can be cheaper and better.

Today, when you go to a dealer or mechanic with a car with an electrical problem, you are essentially rolling the dice. Electrical systems today are exceedingly complex, and the number of places where poor contacts can make things intermittent inside a car is astronomical. The nest of wires everywhere in modern cars, with every single automobile wired differently, has long been, and will forever be, a prime source of non-fatal but annoying problems. Cars are a mess of wiring.

Instead of an electrical system, there should be a network of computers with a standard communications and power bus: one computer in the interior for the human-car-interaction (HCI) system, one in the engine compartment, and one in the rear of the vehicle. They should be the same standard hardware, and they should be interconnected with a ruggedized connector that combines Ethernet with 48 V DC power for motors and lights. So from the interior to the front of the vehicle, there will be one connector. Even better if it uses a network switch topology instead of joining the two computers back to back; a switch is much more flexible. That connection should be triply redundant, having three Ethernets (maybe fibre is a good idea for this) and three sets of power. From the interior to the back of the automobile will run a single connector of exactly the same type. There will be a substantial reduction in the amount of wiring in cars from this division of labour and use of digital communications.

So now we have three computers. They can talk to each other over a standard Ethernet network, and they can vote to ensure that at least one link is good at all times. The computer in the interior deals with all the controls that the occupants manipulate, and provides appropriate feedback based on communications with the other two computers. The other two do everything else. They each have a web of sensors (suspension travel, RADAR, video, tire pressure, temperature in many different places, perhaps via infrared camera) and control the brakes (for advanced features such as anti-lock braking, stability control, traction control, emergency brake boosting, and collision avoidance). All of that feeds into the computers at each end of the vehicle, and they feed, say, suspension road feel back to the interior computer, which instructs the servos that provide feedback to the driver at the steering wheel.

Why do this? The computers are standard: all cars should use inter-operable ones, differing mostly in software. If that happens, then the cost of these computers will be $10 or less. They all have standard connectors for hooking up peripherals: lights, sensors, etc. Most sensors will be deployed in triplicate or better, and the sensors will be the main expense in the system. With sufficient sensors, computers will be able to do a far better job of diagnosing problems, and make accurate, well thought out recommendations about what to do. Any decision tree that can be documented for a mechanic can be programmed into a computer with a sufficient number of sensors. The car should tell us what is wrong.

Since the computers, sensors, and communications protocols will be standardized, open source versions of the software should show up. Consider the walk-through sketched below: the interior computer issues a standardized request to signal right, and the front and back computers begin flashing the right hand side turn signals. They also monitor the current draw from the lights, and detect that the right rear signal light is not consuming any current. This is reported to the driver, who can choose to ignore it, store it for later, or troubleshoot it. The car then walks the user through a decision tree: is the light shining? Yes? Bad sensor. No? Bulb or wiring; replace the bulb... No? Then it is the wiring, which can be easily replaced because it only runs a few feet to the nearby computer. Assembly of the automobile is probably cheaper too, because wire installation costs are reduced.
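
That decision tree is short enough to write down; the message names and the current threshold here are made up for illustration:

    # The rear computer flashes the lamp and checks current draw; the
    # interior computer walks the same tree a mechanic's manual documents.
    EXPECTED_AMPS = 1.75                    # illustrative bulb current

    def lamp_fault(measured_amps: float) -> bool:
        return measured_amps < 0.1 * EXPECTED_AMPS

    def diagnose(current_amps: float, user_sees_light: bool) -> str:
        if not lamp_fault(current_amps):
            return "ok"
        if user_sees_light:
            return "replace current sensor"     # lamp works, measurement doesn't
        return "replace bulb; if that fails, replace the short wiring run"

    assert diagnose(1.7, user_sees_light=True) == "ok"
    assert diagnose(0.0, user_sees_light=False).startswith("replace bulb")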


The design is trivially easy to re-wire. There is no complex wiring to worry about, because everything is directly connected to a computer peripheral bus (probably USB based, again with ruggedized connectors). More expensive cars will probably have computers at all four corners, for enhanced processing speed for the RADAR or other sensor systems. Regardless of the details, the outboard computers will monitor real-time data and send only abstracts to the network. The front and back computers could communicate directly for things such as varying ABS braking forces and comparing wheel rotation speeds.

The 'Check Engine' light should be replaced by built-in diagnostics which perform a detailed analysis, rather than an overly simplified if-then which assumes all the wiring is good. Of course there will be a normal screen, and a way to hook normal peripherals such as a mouse and keyboard into at least the interior computer, so that it can be interacted with (for servicing) like a laptop or desktop system.

So I want computers in cars in a far more thorough, integrated, holistic way, that makes cars:

  • cheaper to assemble
  • easier to diagnose (more sensors, redundant sensors, intelligence in onboard devices to interpret a consonance of data.)
  • easier to repair (shorter wire runs; the car tells you precisely which component to replace, perhaps illustrated on the screen.)
  • scalable to support more features. (dual/quad zone heating adjustment, infinite variety of entertainment systems.)
  • more reliable: Use of fewer, more standard components. Those hardware components will be more thoroughly tested, and in more expensive makes subject to higher quality standards.
  • a good platform for automating the whole task (ie. http://www.darpagrandchallenge.org)

Computers Should Make Stuff Better.

It is clear that we are still at the beginning of the computer revolution. Consumer electronics are very frustrating, automobiles startlingly primitive, and home automation doesn't automate a lot of things. This blog is going to be about how things should be. There are a couple of assumptions that underlie it:
  • computers are, or soon will be, very, very cheap.
  • they will talk to each other via networking, which will be standard and very, very cheap.
  • To be useful, sensors are going to be cheap, and computers are going to have lots of sensors.
  • To be useful, computers are going to have controllers to affect things in the physical world, much as an engine control computer is critical to having an automobile run correctly today. Computers will control things, in the broad sense, in the world we inhabit. To people in a lot of industries this is obviously true already, but it is invisible to people outside them: automotive engine control computers, elevators, washing machines, dishwashers, and refrigerators are all filled with microprocessors today. In the future, these isolated islands really should talk.
  • Telephony, cable, TV, media, broadcasting... they are all in for massive changes, because we do not really need them any more. Ubiquitous, cheap digital networks can transport whatever we want, without any need for broadcasts, which increasingly look like primitive thought control or propaganda methods. Telephony looks utterly archaic. Again, this is obvious to people in the business, but not so obvious to ordinary folks.
This stuff is obvious to me, at least, but others might think it is just nuts. I'm going to write about how stuff should be, based on extrapolation of these assumptions. You will be surprised how far these trends will take us.