One additional thing will need solving for this hypothetical situation. As we grow up, our entire physical being develops a kind of physical awareness that lets us intuitively discern where sensations on our body are coming from by touch alone, usually more or less instantly. Vision and audio, for instance, aren't
needed to know you've just touched a /comfy/ soft blanket, or a cold ice cube spilled onto the counter.
And not only do you recognize these kinds of sensory cues basically immediately, you also know where
(to a first approximation) the touched item of interest is located, relative to your general body position. Again, this is all instinctive to us, and happens 'automatically', with little attention needed in most cases to figure these things out.
Back to the HOT! emergency response: the robowaifu's system will need some kind of touch-localization mechanism so she knows instantly where the hot plate is, and which way to yank her hand back out of danger. If this isn't done accurately, she could make a clumsy move in the reaction, and possibly damage herself, you, or something else.
Again, this is something we all develop instinctively as we grow up, but as designers and engineers we'll have to solve this kind of thing explicitly. I'd guess a first-approximation approach would be to maintain a running map of the surface normals of all the items in her local body space. This should at the least give her the direction to quickly move out of the contact danger (ie, out along the surface normal of the object and away from it). This situational-awareness solution also needs to account for the fact that this 'normal-map' of her environment is dynamic, since both she and the elements in her environment are potentially in motion with respect to each other.
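To make the idea concrete, here's a minimal sketch of the retraction calculation. All the names here are hypothetical, and it assumes the 'normal-map' can already hand us the outward surface normal of the touched object at the contact point, plus the object's velocity relative to her body frame (so a hazard sliding toward her hand is also backed away from). A real system would of course be running this inside a fast reflex loop, not as a one-off function call.

```python
import math

def withdrawal_vector(surface_normal, object_velocity=(0.0, 0.0, 0.0)):
    """Direction to yank the hand back after a dangerous contact.

    surface_normal  : outward-facing normal of the touched object at the
                      contact point, in the body frame (hypothetical input
                      from the 'normal-map' of local body space)
    object_velocity : object's velocity relative to the body frame, so a
                      moving hazard is retreated from, not just a static one

    Returns a unit vector pointing out along the normal and away from the
    object's own motion.
    """
    # Re-normalize, since a map entry may be stale or unnormalized.
    n = surface_normal
    mag = math.sqrt(n[0]**2 + n[1]**2 + n[2]**2)
    if mag == 0.0:
        raise ValueError("degenerate surface normal in map entry")
    n = (n[0] / mag, n[1] / mag, n[2] / mag)

    # Escape = out along the normal, minus the object's motion toward us.
    v = object_velocity
    escape = (n[0] - v[0], n[1] - v[1], n[2] - v[2])
    emag = math.sqrt(escape[0]**2 + escape[1]**2 + escape[2]**2)
    if emag == 0.0:
        # Object is moving exactly along its own outward normal at unit
        # speed; falling back to the raw normal is a safe default.
        return n
    return tuple(c / emag for c in escape)

# A hot plate directly below the palm, not moving: retract straight up.
print(withdrawal_vector((0.0, 0.0, 2.0)))  # → (0.0, 0.0, 1.0)
```

The velocity term is the part that addresses the dynamic-environment point above: with everything static it reduces to "move out along the normal," but a hazard sliding toward the hand biases the escape direction away from its approach.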
This is really quite a remarkable domain to tackle from a systems-engineering perspective. Now that I've begun applying myself to considering the many things needed, most other design & engineering endeavors seem rather boring to me by comparison. :^)