
Semantic analysis applies to some languages more than others. (Recall that "static" means before execution and "dynamic" means during execution.) Dynamic languages do almost no analysis at compile time, while some languages are super static, permitting almost every check to be done before execution. These notes pretty much assume you are writing a compiler for a static (or mostly static) language. Note that when we say "semantic analysis" we almost always mean static semantic analysis.

What constitutes a static semantic rule varies so much from language to language that we'll just dump a bunch of generic examples here. In no particular order, we might see checks like:

- Identifiers have to be declared before they are used.
- Identifiers may not be used in a place where they might not yet have been initialized.
- Identifier declarations must be unique within a scope. There can be multiple uniqueness buckets: in some languages, functions and non-functions live in separate namespaces. Overloading may be permitted to some degree.
- When assigning expression $e$ to a variable declared with type $t$ (or passing argument $e$ to a parameter $v$ of type $t$, or returning $e$ from a function with return type $t$), the type of $e$ must be compatible with type $t$. Exactly what "compatible" means varies from language to language, but common rules are things like: $e$.type is the exact same type as $t$, or $e$.type and $t$ have the same name, or perhaps the same structure, or the programmer has set up an implicit conversion, or coercion, from $e$.type to $t$, or ...
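Several of these checks reduce to lookups in a symbol table of nested scopes. Here is a minimal sketch in Python; the names (`Scope`, `assignable`) and the single int-to-float coercion are invented for illustration, not taken from any particular language:

```python
# Sketch of three static checks: declare-before-use, uniqueness within
# a scope, and assignment compatibility with one coercion rule.

class Scope:
    def __init__(self, parent=None):
        self.parent = parent
        self.symbols = {}  # identifier name -> declared type

    def declare(self, name, type_):
        # Declarations must be unique within a scope (shadowing a
        # declaration in an enclosing scope is allowed here).
        if name in self.symbols:
            raise Exception(f"'{name}' already declared in this scope")
        self.symbols[name] = type_

    def lookup(self, name):
        # Identifiers must be declared, here or in an enclosing scope,
        # before they are used.
        scope = self
        while scope is not None:
            if name in scope.symbols:
                return scope.symbols[name]
            scope = scope.parent
        raise Exception(f"'{name}' used before declaration")

# The implicit conversions this toy language permits.
COERCIONS = {("int", "float")}

def assignable(source_type, target_type):
    # e.type is the exact same type as t, or a coercion exists.
    return source_type == target_type or (source_type, target_type) in COERCIONS

global_scope = Scope()
global_scope.declare("x", "int")
inner = Scope(parent=global_scope)
inner.declare("y", "float")

print(assignable(inner.lookup("x"), inner.lookup("y")))  # prints True: int coerces to float
print(assignable("float", "int"))                        # prints False: no narrowing coercion
```

Real compilers differ mainly in how `assignable` is defined (name equivalence, structural equivalence, coercion chains) and in how many uniqueness buckets each scope carries.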
Semantic analysis typically produces a semantic graph from the AST.
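One way to read "producing a semantic graph" is: resolve each identifier use in the AST to the node that declared it, so the tree acquires cross-edges and becomes a graph. A hedged sketch under that reading, with invented node classes (`VarDecl`, `VarUse`, `Block`):

```python
# Sketch: decorating an AST so each variable use points at its
# declaration node, turning the tree into a (semantic) graph.

class VarDecl:
    def __init__(self, name):
        self.name = name

class VarUse:
    def __init__(self, name):
        self.name = name
        self.referent = None  # filled in by analyze(): the cross-edge

class Block:
    def __init__(self, statements):
        self.statements = statements

def analyze(node, env):
    # env maps identifier names to their VarDecl nodes.
    if isinstance(node, VarDecl):
        env[node.name] = node
    elif isinstance(node, VarUse):
        node.referent = env[node.name]  # tree node now links sideways: a graph
    elif isinstance(node, Block):
        for statement in node.statements:
            analyze(statement, env)

decl = VarDecl("x")
use = VarUse("x")
analyze(Block([decl, use]), {})
print(use.referent is decl)  # prints True: the use points at its declaration
```

A production semantic graph usually carries more than name bindings (types on every expression, resolved overloads, and so on), but the shape of the transformation is the same.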
Exercise: Come up with some English sentences that are syntactically correct but semantically nonsense. Here are two to get you started: "Colorless green ideas sleep furiously" (Chomsky's famous example), and "One gaseous solid rocks smelling the color seventeen will think yesterday."