If sensors like these become cheap enough, a machine can understand the precise location and posture of a human. The article mentions car seats; imagine a bed that adjusts itself to minimize pressure points.
I should mention a project out of CMU by Chris Atkeson and Daniel Wilson, in which they put just a few cheap accelerometers in the floorboards of a house. The algorithm processing these sensors could localize people within the rooms with remarkable accuracy. The challenge then becomes sensor fusion and system integration: using this information to boost the performance of the entire system. For instance, a human tracker using vision alone would be outperformed by one that had a reasonable seed guess from pressure sensors.
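To make the "seed guess" idea concrete, here is a minimal sketch of one simple fusion scheme (my own illustration, not the CMU system): the coarse floor-sensor position acts as a Gaussian prior that re-weights candidate detections from a vision tracker. The function name, coordinate convention, and `sigma` value are all hypothetical.

```python
import math

def fuse_position(floor_est, vision_detections, sigma=0.5):
    """Pick the vision detection most consistent with the coarse
    floor-sensor estimate, using a Gaussian prior as the seed guess.
    Coordinates are (x, y) in meters; sigma (meters) is a
    hypothetical tuning value for the prior's spread."""
    def prior_weight(p):
        dx = p[0] - floor_est[0]
        dy = p[1] - floor_est[1]
        return math.exp(-(dx * dx + dy * dy) / (2 * sigma ** 2))
    # Score each candidate by prior weight times detector confidence,
    # so implausible detections far from the floor estimate are demoted.
    best = max(vision_detections, key=lambda d: prior_weight(d[0]) * d[1])
    return best[0]

# Floor sensors say the person is near (2.0, 3.0); vision proposes
# two candidates, each paired with a detector confidence.
seed = (2.0, 3.0)
candidates = [((2.1, 2.9), 0.6), ((5.0, 1.0), 0.7)]
print(fuse_position(seed, candidates))  # → (2.1, 2.9)
```

Even though the second detection has higher raw confidence, the prior from the floor sensors vetoes it, which is exactly the boost a vision-only tracker lacks.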
The second application is rich manipulation. A robot grasping a glass must apply enough force not to drop it, yet remain gentle enough not to break it. I doubt humans rely on much higher-level reasoning for this, unlike in vision, where humans hold a clear advantage over computer programs. Rather, a robot could sense the weight fairly easily, identify the type of surface, and learn how brittle that surface is.
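The grasping trade-off can be sketched with back-of-the-envelope physics: the grip must exceed the force needed to resist slip, but stay under a fragility limit learned for the surface. This is my own illustration under a simple two-finger friction model; the friction coefficient, safety margin, and fragility cap are all hypothetical numbers.

```python
def grasp_force(weight_n, friction_mu=0.8, safety=1.5, max_force_n=20.0):
    """Minimum normal force per finger to hold an object against
    gravity without slipping, for a two-finger grasp:
    F >= weight / (2 * mu), scaled by a safety margin.
    max_force_n is a hypothetical fragility cap that would be
    learned from the sensed surface type."""
    needed = safety * weight_n / (2 * friction_mu)
    if needed > max_force_n:
        # The object cannot be held safely without risking breakage.
        raise ValueError("required force exceeds fragility limit")
    return needed

# A 0.3 kg glass weighs about 2.94 N; the sketch suggests gripping
# with roughly 2.76 N per finger.
print(round(grasp_force(2.94), 2))  # → 2.76
```

The point is that both bounds come from sensing, not reasoning: weight fixes the lower bound, and the learned brittleness of the surface fixes the upper one.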