The camera on the HMD sees what the consumer sees, and with object recognition it can identify which products the consumer is looking at and what activity they are engaged in. It will also know their location via GPS, and may know who they're with (e.g. via facial recognition or communication between devices).
The HMD can then ask the consumer questions about the product, e.g. "What made you pick product X over product Y?" (when the consumer picks one of two products on the shelf). The consumer can give a verbal reply (which would be automatically transcribed) or select from a range of options (by touching a virtual button in mid-air).
The cost to the consumer is intrusion, but the actual time taken to answer a question would be measured in seconds. Such research should of course be opt-in, and could be paid for (micro-payments for micro-research). The consumer could have a "not now" button to pause the micro-market research if they were finding it annoying.
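To make the flow concrete, here is a minimal sketch in Python of how such a context-triggered micro-survey might hang together: a question is generated only when the consumer has opted in and hasn't pressed "not now", and each answer credits a micro-payment. The class and field names are illustrative assumptions, not any real HMD API.

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class Context:
    """Context the HMD could infer: what, where, and with whom."""
    product_picked: str          # from object recognition
    product_passed_over: str     # the alternative left on the shelf
    location: str                # from GPS / positioning
    companions: list[str] = field(default_factory=list)  # e.g. facial recognition

@dataclass
class MicroSurvey:
    opted_in: bool = True
    paused: bool = False         # set when the consumer hits "not now"
    credit: float = 0.0          # accumulated micro-payments

    def maybe_ask(self, ctx: Context) -> Optional[str]:
        """Build one short, context-relevant question, or stay silent."""
        if not self.opted_in or self.paused:
            return None
        return (f"What made you pick {ctx.product_picked} "
                f"over {ctx.product_passed_over}?")

    def record_answer(self, answer: str, payment: float = 0.05) -> None:
        """Store a transcribed verbal reply or a tapped option; credit the consumer."""
        # In practice the answer would be sent to the researcher along with the context.
        self.credit += payment
        print(f"Logged: {answer!r} (credit now {self.credit:.2f})")

    def not_now(self) -> None:
        """The consumer's escape hatch: pause further questions."""
        self.paused = True

# Example: the consumer picks product X over product Y in a supermarket.
survey = MicroSurvey()
ctx = Context("Product X", "Product Y", "Supermarket")
question = survey.maybe_ask(ctx)
if question:
    print(question)
    survey.record_answer("It was cheaper and I know the brand.")
```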
The key advantage is context: the device knows where the consumer is and what they're looking at, so the questions it asks are relevant. This relevance makes the questions less intrusive to the consumer, and more useful to the researcher.
This methodology would be impractical with existing mobile technology, as the consumer would need to point their smartphone at the products they're looking at and answer questions on screen. This would take tens of seconds, rather than the few seconds needed with an HMD, and would probably be too intrusive.