The 18th-century view of language was that languages naturally fell into decay because of society’s decadence or indifference: people took their language for granted and were sloppy in their use of it, some of this sloppiness stuck fast, and the language changed to suit. As evidence, scholars pointed to the fact that older languages tended to depend on highly complex grammatical inflexions – word endings – to show a word’s function in a sentence (for example, the ‘show possession’ rule in Old English was ‘add the suffix –es’). Over time, it can be shown that the number of inflexions dropped, and in modern English we have almost done away with inflexions altogether.
Unfortunately, this theory has at least one major flaw. Even though the number of inflexions has dwindled, other grammatical devices, such as auxiliary verbs (‘do’, ‘will’ and so on), have evolved to take their place; syntax (word order) has also become very important to clarity of meaning. And anything that could be expressed in the older tongue can still be expressed equally subtly today. Ultimately, the theory is highly subjective: it relies on personal opinion, not scientific fact, about what counts as ‘highly evolved’ and what as ‘decadent’. This is not, therefore, science.
This idea that language is a very special thing and thus needs preserving is still in evidence today. It is certainly special – it is part of what separates us from the beasts – but how far should preservation go? The French have a language academy, a government department no less, to monitor language change and to try to prevent some of it occurring. In Britain, teachers and much of the media choose to use a ‘gold standard’ prestige variety, or dialect, of English called Standard English. The problem with this ‘gold standard’ is that it tends to create a hierarchy in which some regional and social dialects come to be looked down upon, despite the fact that even the most extreme dialects have regular grammatical structures and work perfectly well to express sophisticated and subtle ideas.
Another theory says that language change is an entirely natural process: changes are automatic, and therefore cannot be observed or controlled by the speakers of the language. What is, to the human ear, a single ‘sound’ is actually a collection of very similar sounds, each a ‘low-level deviation’ from an ‘idealized form’. The argument is that language change is simply a slow drift of the idealized form, a small deviation at a time.
The obvious problem here is that, without some kind of reinforcement, the deviations might simply drift back and forth and cancel each other out. The theory was therefore extended to accommodate this, adding mechanisms that reinforce a deviation: the simplification of sounds, or children imperfectly learning the speech of their parents, with the imperfect form eventually becoming dominant.
This ‘simplification of sounds’ theory suggests that certain sounds and sound combinations (e.g. ‘butter’, with the phoneme /t/ sounded) are easier to pronounce than others, so the natural tendency of speakers is to modify the hard-to-say sounds into easier ones (e.g. ‘bu’er’, with a glottal stop in place of /t/). Another example is the Latin word ‘cam-e-ra’, meaning ‘room’, changing into early French ‘cam-ra’. As it is hard to say /m/ and /r/ one after the other, this was ‘simplified’ by adding /b/ in between, giving /cambra/ (and so leading to modern French ‘chambre’). More recent examples are the English word ‘nuclear’, which many people pronounce as ‘nucular’, ‘government’ pronounced ‘goverment’, ‘Arctic’ pronounced ‘Artic’, and so on.
The problem with this part of the theory is that not everything in a language is hard to pronounce, so the process would only work on a small part of the language and could not be responsible for the majority of sound changes. Secondly, it is highly questionable whether ‘nucular’ really is easier to pronounce than ‘nuclear’: you will get different answers from different people. Simplification no doubt exists, but treating it as a cause of language change, rather than a symptom, is probably too subjective to be scientific.
The other part of the theory – that children incorrectly learn the language of their parents – doesn’t hold water either. If we look at an extreme case, immigrants into England, we find that the children of immigrants almost always learn the language of their friends at school, regardless of the parents’ dialect or original language. Likewise, children of British immigrants in the United States nearly always speak with one of the many regional American accents. The parents’ linguistic contribution, then, matters less than the social group the child belongs to. Which leads to theory number three…
The third theory is a social one, and has been advocated by the eminent American linguist William Labov.
What Labov found was that a small part of a population begins to pronounce certain words – words sharing, for example, the same vowel – differently from the rest of the population. This occurs naturally, since humans cannot all reproduce exactly the same sounds. At some later point, however, the difference in pronunciation starts to become a signal of social and cultural identity. Others in the population who wish to be identified with the group adopt the difference, consciously or (more likely) unknowingly, exaggerate it, and apply it to the pronunciation of other words. Given enough time, the change ends up affecting every word with that same vowel, and so becomes a regular sound change.
We can argue that similar phenomena apply to grammatical and lexical change. An interesting example is computer-related words creeping into Standard English: ‘bug’, ‘crash’, ‘net’, ‘e-mail’ and so on. This fits the theory: the words were originally used by a small group (computer scientists), but with the boom in the Internet everybody wants to appear technology-savvy, and so these words filter into the mainstream language. We are currently at the exaggeration phase, where people coin odd terms like ‘cyberpad’ and ‘dotcom’ which not only drive some people crazy but never even existed in computer science in the first place.
Labov’s theory of language change sounds much more plausible than its predecessors – and it is also the most recent. Humans are, after all, social animals, and we rarely do things without a social reason. We are also deeply taken with ideas of superiority and power, and so Labov’s social theory of language change – and no doubt others that will follow it – seems to make the most sense.