The ultimate solution is probably an always-on video camera that records exactly what you see (this may well arrive for other reasons, once society has come to accept the privacy trade-offs) and identifies what you eat via object recognition. The recognized foods would then be looked up automatically against food databases to give accurate nutrition information.
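The lookup stage of that pipeline is the simple part. A minimal sketch, assuming an upstream vision model has already produced a food label and an estimated portion size (the recognizer, the labels, and the per-100g nutrition table below are all hypothetical stand-ins, not a real database):

```python
# Hypothetical per-100g nutrition table; a real system would query a
# public food-composition database instead.
NUTRITION_PER_100G = {
    "banana": {"kcal": 89, "carbs_g": 23, "protein_g": 1.1, "fat_g": 0.3},
    "boiled egg": {"kcal": 155, "carbs_g": 1.1, "protein_g": 13, "fat_g": 11},
}

def nutrition_for(label: str, grams: float) -> dict:
    """Scale per-100g figures to the portion size estimated by the camera."""
    per100 = NUTRITION_PER_100G[label]
    return {k: round(v * grams / 100, 1) for k, v in per100.items()}

# e.g. the camera recognises a 120 g banana:
print(nutrition_for("banana", 120))
```

The hard part, of course, is the object recognition and portion estimation feeding into this lookup, not the lookup itself.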
In the absence of this non-existent technology, what are the other options? A tiny accelerometer could be embedded in the jaw to give data on mastication as a proxy for how much you are eating (confounded by talking, chewing gum, etc.); a sensor could be embedded in the stomach to measure something (perhaps pH?); or some kind of sensor in the oesophagus.
I think it is worthwhile having an input sensor in addition to a sensor that measures the impact on blood substrates. That way we could attribute changes in blood substrates to particular foods, and therefore determine what to eat.
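To make the pairing concrete, here is a hedged sketch of how timestamped meal events from a hypothetical intake sensor could be matched against blood glucose readings to score each food by its post-meal rise. All data, the 90-minute window, and the units are illustrative assumptions:

```python
# Meal events from a hypothetical intake sensor: (minutes since midnight, food).
MEALS = [
    (480, "porridge"),
    (780, "white rice"),
]

# Glucose readings from a CGM-like sensor: (minutes since midnight, mmol/L).
GLUCOSE = [
    (475, 5.0), (510, 6.1), (540, 7.2), (570, 5.4),
    (775, 5.1), (810, 7.9), (840, 9.0), (870, 6.0),
]

def post_meal_rise(meal_time: int, window: int = 90) -> float:
    """Peak glucose within `window` minutes of a meal, minus the last
    pre-meal reading (an illustrative, not clinical, metric)."""
    baseline = [g for t, g in GLUCOSE if t <= meal_time][-1]
    peak = max(g for t, g in GLUCOSE if meal_time < t <= meal_time + window)
    return round(peak - baseline, 1)

for t, food in MEALS:
    print(food, post_meal_rise(t))
```

Even this toy version shows why the input sensor matters: without the meal timestamps, the glucose trace alone cannot tell you which food caused which excursion.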