This paper addresses the problem of mapping natural language to its semantics. It presupposes that the input is in random (compressed) form and details a methodology for extracting the semantics from that normalized form. The idea is to enumerate contextual cues and learn to associate those cues with meaning. The process is inherently fuzzy and, for this reason, is also inherently adaptive. It is shown that the influence of context on meaning grows exponentially with the length of a word sequence. This suggests that rule-based randomization plays a key role in rendering a field-effect natural language semantic mapping tractable. An example of rule-based randomization for semantic normalization is as follows. Suppose that two commands to a robot are deemed to be equivalent, namely "Grasp and pick up the glass" and "Hold the cup and raise it". Their mutual normalization might then be "Grab container. Lift container." Clearly, the randomization process can be effected by rules. Moreover, the normalized syntax makes the result of any semantic mapping process, such as the one detailed herein, more efficient. A natural language front end is described, which is designed to reduce the impedance mismatch between the human and the machine. Most significantly, the effective translation of natural language semantics is shown to depend critically on an accelerated capability for learning.
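To make the normalization example concrete, the vocabulary-level part of rule-based randomization can be sketched as a table of rewrite rules mapping surface words to canonical tokens. The synonym table and function name below are illustrative assumptions, not the paper's implementation; full normalization to "Grab container. Lift container." would additionally require canonicalizing word order, which this sketch omits.

```python
# A minimal sketch of rule-based semantic normalization (randomization),
# assuming a hand-written synonym table; all names here are illustrative.

# Map surface phrases to canonical tokens.
CANONICAL = {
    "grasp": "grab", "hold": "grab",
    "pick up": "lift", "raise": "lift",
    "glass": "container", "cup": "container",
}

def normalize(command: str) -> str:
    """Rewrite a command into a vocabulary-normalized form."""
    text = command.lower().rstrip(".")
    # Apply longer rules first so "pick up" is rewritten as a unit.
    for phrase in sorted(CANONICAL, key=len, reverse=True):
        text = text.replace(phrase, CANONICAL[phrase])
    return text

print(normalize("Grasp and pick up the glass"))  # grab and lift the container
print(normalize("Hold the cup and raise it"))    # grab the container and lift it
```

After vocabulary normalization, the two commands share the same canonical action terms ("grab", "lift", "container"), so a downstream semantic mapper needs to learn far fewer surface variants, which is the efficiency gain the abstract attributes to normalized syntax.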