Tag Archives: Descriptivism

Notice of interruption to service

28 Mar

When this blog reached its five-year anniversary in 2018, I wrote a summary of the conclusions it had reached so far, and I vaguely thought at the time that, should it run for another five years, that might be a good time for a pause. Now we’ve reached that point – Ten Minutes Past Deadline is 10 years old this week! – and it does seem to be the moment to take stock.

Looking back at the five-year anniversary post, I discover that the blog still essentially agrees with itself in its attitude to the importance of editing, the complexities of online news as it expands into the anglosphere, and the nuanced importance of the role of formal English. The standard of mathematics in newsrooms has not improved over the past five years, and the corrections columns remain as embarrassing to people in our profession as ever. In fact, this is the problem: the blog has settled, as blogs tend to do, into a series of themes, and for a while now has been incrementally exploring them, rather than breaking new ground. It is, perhaps, getting a little repetitive.

So it’s time for some major mental engineering work: tracks of thought will need to be pulled up, sleepers will need to be replaced and some much-needed intellectual ballast laid down. Ten Minutes Past Deadline is not closing – blogs never really close – but the pace of updates will be slower, and motivated more by new thoughts, when they come along, rather than the rehearsal of old ones. Hopefully the work will result, like the replacement escalators at South Kensington, in a less juddery experience for customers, and hopefully will not take as long as that project seemed to.

And, as was the case five years ago, I remain always grateful for the blog’s readers. The visits, the engagement, the comments and the retweets are what make blogging worthwhile, and the content here has always been greatly enhanced by the contributions of others. I hope that we will be back soon with more. Until then, tickets remain valid via all reasonable routes and we would like to apologise for any inconvenience this may cause to your journey.

Nouvelle vague

2 Aug

Vagary is back! Not, that is, in its traditional sense of “foible”, but in its rare, possibly-on-the-brink-of-emerging sense of “vagueness”. We spotted it three years ago in a record review that was even then several years old, and at the time it appeared to be a one-off variation. But the other week, what should appear in the Tribune’s copy queue but this:

Given that the speaker is talking about the clarification of certain issues, it seems clear that what she means by “vagary” is not “aberration” but “ambiguity” – not a definition that has found its way into any dictionary, even though you can appreciate how easily it can be formed as a noun out of “vague”.

And not only that: here are two more in the wild, from a film review lamenting the generic quality of Hollywood remakes compared with their foreign-language originals:

Here again, neither “foible” nor “whim” makes sense in context, but “imprecision” makes perfect sense.

Now that “vagary” has actually appeared in the subs’ queue, we are confronted in real life with the issue we wrestled with hypothetically the first time, which is how to handle it in copy. I’m not sure it’s anywhere near being understood in this new sense with that spelling, and, as we discovered three years ago, even “vaguery” isn’t widely accepted, though that might be the best choice for etymological clarity and fidelity to the speaker.

In the end, I went with “vagueness” in square brackets, even though square brackets are the editor’s last resort:

This blog does not like to make too many usage predictions (although it remains confident of the eventual collapse in distinction between “not to be overestimated” and “not to be underestimated”). But if people seem to be discovering a neologism all by themselves like this, with no obvious high-profile precedent, you do get the sense that a new word might be coalescing into being. One to watch.

ї before е

1 Mar

It began on a faintly sceptical note – “what is the BBC up to now?” – but the Daily Mail’s change of heart, and change of house style from Kiev to Kyiv, happened quickly.

Last Wednesday, this article appeared on its website: for the Mail, a rare discussion of the implications of language that came close to publicly acknowledging the existence of the Daily Mail style guide (and how one would love to get a sight of that). And although the headline and first paragraph are redolent of the usual suspicion of the national broadcaster,

the rest of the article is actually an informative and lucid discussion of the question:

“Ukraine’s capital is known as Київ in Ukrainian and Киев in Russian. Both terms do not have a direct translation into the Roman alphabet, with Kiev, Kyiv, Kyyiv or Kiyev all being possibilities. 

But the spelling ‘Kiev’ is intrinsically linked with the old USSR due to its widespread use by the British and Americans while the city was under Soviet rule. 

This continued after independence in 1991, until ‘Kyiv’ was legally approved by the Ukrainian government …

Young Ukrainians see ‘Kiev’ as a relic of the Soviet past, and this view is now shared by the government, which launched a ‘KyivNotKiev’ campaign in 2018. 

At this stage, the Mail is still keeping its journalistic distance – the last line of the article is a brisk “MailOnline has contacted the BBC for comment”, interrogating the corporation on the reader’s behalf. But by Friday morning, the following note had appeared in the print edition:

and by Friday lunchtime, the website was leaping on board a social media bandwagon to get others to follow suit:

A style guide change within two days of first raising a style issue in public, and an explicitly advertised one at that: that’s unusual behaviour for the Mail.

Of course, commentators were not slow to point out that acknowledging preferred terms in this way might set an awkward precedent for Britain’s leading scourge of wokeness. As Neil Fisher of the Times acutely put it: “I love the distinction here between ‘virtue signalling’ and ‘a symbolic show of support’.”

But there seems no disagreement in the Mail, or among its critics, or in linguistic circles, about another aspect of the decision, which is the necessity and desirability of prescriptivism in these instances. Although linguists frequently condemn the imposition of editors’ arbitrary (sometimes very arbitrary) rules on published writing, few ever object when an oppressed group pleads for a deliberate change in language to be enforced. On Google Ngrams, which offers results for English up to 2019, the use of Kiev (blue line) always comfortably outstrips Kyiv (red line), which barely figures on the graph until the 1990s – one guesses as a result of independence – and then kicks up sharply from the early 2000s onwards, at around the time of the Orange Revolution in 2004. Kiev is, strictly speaking, the popular choice over time. But in circumstances like these, no one contends that corpus results should decide an argument on usage. Other considerations prevail.
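
For anyone who would rather interrogate the corpus than eyeball the chart, the Kiev/Kyiv comparison can be roughed out in a few lines of Python. What follows is a minimal sketch rather than a documented recipe: it assumes the unofficial JSON endpoint that sits behind the Ngram Viewer’s own chart, and an “en-2019” corpus label, neither of which Google documents, so both may change without notice.

    import requests

    def ngram_share(phrases, year_start=1900, year_end=2019):
        """Fetch each phrase's relative-frequency series from the Google Books Ngram Viewer.

        Uses the unofficial JSON endpoint behind the Viewer's chart; the URL, the
        parameter names and the "en-2019" corpus label are assumptions, not a
        documented API.
        """
        response = requests.get(
            "https://books.google.com/ngrams/json",
            params={
                "content": ",".join(phrases),
                "year_start": year_start,
                "year_end": year_end,
                "corpus": "en-2019",
                "smoothing": 3,
            },
            timeout=30,
        )
        response.raise_for_status()
        # The endpoint returns a list of {"ngram": ..., "timeseries": [...]} objects.
        return {entry["ngram"]: entry["timeseries"] for entry in response.json()}

    data = ngram_share(["Kiev", "Kyiv"])
    for word, series in data.items():
        print(word, "peak relative frequency:", max(series))

Run as is, it prints the peak relative frequency for each spelling; plotting the two series would redraw the blue and red lines described above.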

These debates are not always easy ones: names and spellings matter to oppressors as well as the oppressed, as dictators’ renamings of cities and countries (and, in the case of President Nyazov of Turkmenistan, even days of the week) remind us. The Guardian thought hard before replacing “Burma” with “Myanmar” in copy, weighing the balance between an old name redolent of empire and a new one chosen by a brutal junta.

But the point is: these choices matter. They matter not just to editors in the newsroom, but to the people we are reporting on. Using a name, or shunning it, is, in the words of David Marsh, the Guardian’s former style guide editor, “a way of indicating, or at least of hinting at, approval or disapproval” – a way of signalling your support, and your values. Popularity and precedent, the principles on which descriptivism runs, are not equal to these circumstances. This is the other, not always acknowledged, side of prescriptivism – progressive, rather than regressive, and alive to the resonances embedded in a word, or even a spelling, that a purely descriptivist approach cannot hear.

Death of a Dictionary

4 Aug


Manhattan, 1961. He was a charismatic gumshoe with a ready wit, the leg-man for a sedentary detective genius. She was a woman with money and trouble, big brown eyes and a “mouth that would have been all right with the corners turned up instead of down”. In the study of a New York brownstone, fear and murder are about to meet their match. Except there has been an outbreak of descriptivism, so the detective genius is indisposed:

“I’d better explain,” I told her … “There’s a fireplace in the front room, but it’s never lit because he hates open fires. He says they stultify mental processes. But it’s lit now because he’s using it. He’s seated in front of it, on a chair too small for him, tearing sheets out of a book and burning them. The book is the new edition, the third edition, of Webster’s New International Dictionary, Unabridged, published by the G. & C. Merriam Company of Springfield, Massachusetts. He considers it subversive because it threatens the integrity of the English language. In the past week he has given me a thousand examples of its crimes. He says it is a deliberate attempt to murder the — I beg your pardon …”

She was staring up at me. “He’s burning up a dictionary?”

He rarely stands when a caller enters, and of course he didn’t then, with the dictionary, the two-thirds of it that was left, on his lap. He dropped sheets on the fire, turned to look at her, and inquired, “Do you use ‘infer’ and ‘imply’ interchangeably, Miss Blount?”

She did fine. She said simply, “No.”

“This book says that you may. Pfui.”

Webster’s Third, as it is known, caused such a stir when it was published in September 1961 that it was condemned in the comment pages of the New York Times, described as a “political pamphlet” by the historian Jacques Barzun and ceremonially destroyed, as we see, by Nero Wolfe in Rex Stout’s thriller Gambit. For lexicographers, it was a landmark in the journey from prescriptivism to descriptivism that had begun in the 1910s; for the first time, a major US dictionary had been explicitly based on observation of words in everyday usage, rather than authoritative declarations of meaning.

As Wikipedia notes, it eliminated the labels “colloquial”, “correct”, “incorrect”, “proper”, “improper”, “erroneous”, “humorous”, “jocular”, “poetic”, and “contemptuous”, among others, leading to charges that it had abandoned the idea of “proper English”. Looking back in a 2012 article in Publishers Weekly, David Skinner wrote: “Pronunciations came to include a dizzying number of variations, all apparently equal in merit. Most controversial of all was [the editor’s] policy on disputed usages: Webster’s Third adopted a position of scholarly neutrality on words more conservative dictionaries rushed to label colloquial or slang or vulgar. It was a pure dictionary, all about the words, but utterly agnostic on many tricky issues dictionary users cared deeply about.”

It was, then, a classically descriptivist book: admirably humble and egalitarian in its intent, but maddeningly silent on the socially enforced niceties of discourse that readers nonetheless had to navigate. Like much descriptivist literature, it resembles an etiquette book that lectures you on the tyranny of dress codes when all you want to know is how to knot a tie. And although it is widely hailed for its great scholarship, its symbolic role in the culture wars makes it hard for some people to acknowledge even to this day.

In the historic gay rights case Bostock v Clayton County, decided in June, the US Supreme Court ruled that the employment protections of the Civil Rights Act did indeed extend to those unfairly treated as a result of their sexuality. The lead opinion was written by the Trump-appointed Justice Neil Gorsuch – a judge on the right of the court. But although he may have surprised liberals by finding in favour of Bostock, he was apparently still too much of a conservative to rely on Webster’s Third in doing so. As the lawyer and linguist Stephen Mouritsen points out on Twitter, Gorsuch used Webster’s Second (1954) to find a definition of “discrimination” as it was understood at the time the Civil Rights Act was passed, even though Webster’s Third was seven years closer in time to the passing of the Act in 1964.

https://twitter.com/s_mouritsen/status/1272800136985366529

https://twitter.com/s_mouritsen/status/1272800139011227648

https://twitter.com/s_mouritsen/status/1272800140705714176

https://twitter.com/s_mouritsen/status/1272800142429577219

A major consolidation of US civil rights for a minority suffering injustice? By all means. But not with the assistance of That Book.

It’s possible that this view of Webster’s Third has hardened over 60 years, but I’m not sure. One gets the impression that attitudes may have been entrenched right from the moment it was published:

There wasn’t much of the dictionary left, and, while I counted, five-hundreds and then C’s, he tore and dropped. I counted it twice to make sure, and when I finished there was no more dictionary except the binding.

“Twenty-two grand,” I said.

“Will this burn?” he asked.

“Sure; it’s buckram. It may smell a little. You knew you were going to burn it when you bought it. Otherwise you would have ordered leather.”

Vague impression

1 Oct

I’m four years late to this, it’s none of my business and I couldn’t possibly prove it, but I bet this originally said “vagueries”. Or at least, I bet that either:

(i) the writer wanted to say “vagueries”, was unsure how to spell it, assumed the word he wanted was “vagaries” and spelt it thus; or

(ii) the writer spelt it “vagueries” and an editor assumed he meant “vagaries” and changed it.

“Vagaries”, of course, can be easily looked up. “Vagueries” – well, the establishment dictionaries are silent, and only Wiktionary and its like are prepared to essay a definition: “a vagueness, a thing which is vague, an example of vagueness”, per yourdictionary.com.

“Vague” and “vagary” are closely related – the authorities suggest that both probably derive from the Latin verb vagari, “to wander”. But in their journey through Middle French and into English they have come to acquire two distinct meanings: “imprecise” and “aberrant”. And, given that Stereogum’s critic is objecting to Coldplay’s “vague platitudes about walking through fires or turning your magic on”, it is clearly the former that he means.

You might think the “correct” English word in this instance would be “vaguenesses”, but the authorities seem reluctant to countenance that either, at least in the plural. And in any event “vague”, a word that arrived from French, instinctively sounds as though it ought to become a noun in a more French way, by analogy with the same process that has given English “drolleries” and “fripperies”.

If it were the case that the writer wrote “vagueries” and the editor changed it, that would be a shame. Rock critics are traditionally granted a lot of licence in terms of tone, register, syntax, hyperbole, and even decorum, in their reviews, as part of the wide range of voices contained every day in a newspaper. A quick bit of neologising is hardly out of the way in the music pages.

If, however, the writer put “vagaries”, then we face a very advanced editing conundrum indeed: whether an editor should replace a word that is in the dictionary but doesn’t make sense with one that isn’t, but does. That’s quite a big call, but I think the answer is clear. “Vagaries” is just wrong. Make it “generalities” or “platitudes” again if you’re worried about over-reaching your authority, but I think it’s clear from the sound of the word what the writer was trying to do. It might be wise to consult first, but I’d be lobbying to go for it.

 

(And with that, Ten Minutes Past Deadline is off, kicking through the leaves, for its traditional autumn break. See you at the end of the month, the collapse of the west permitting.)

Happy anniversary

3 Apr

Ten Minutes Past Deadline is five! I’d like to say “five today”, but in fact it was five last Friday: the first post on this site went up on 30 March 2013.

Although many subjects have attracted its attention, including baseball, cartoons and the rise of IMDb’s formidable robot copydesk, this blog has all too frequently returned to the subject that first inspired it: prescriptivism and formal English. The first post that ever appeared here arose from years of reading two inspiring blogs – You Don’t Say and HeadsUp – and, through them, becoming increasingly engaged with editing’s big issues: ethics, grammar, ambiguity, statistics, and, above all, language change.

Written in response to a debate on how forward-thinking one should be when editing someone else’s writing, that post was motivated by a slightly defensive sense that although formal English was indefensible, it was somehow important too: and that, even though the case against prescriptivist crotchets was unanswerable, deadline was not the right moment to get into an argument with a writer over notional agreement.

Five years later, that debate is as hard to resolve as ever, but the advice, tips and ideas readers have offered over that time have helped move the blog forward immeasurably. Thank you to everyone who’s read, commented, shared, liked, quoted, linked to, disagreed with and retweeted it over the past half-decade. And, by way of celebration, here is a distillation of what Ten Minutes Past Deadline now thinks it thinks (at least currently) about formal English:

 

Formal English is absurd, but unmistakable

There is no academic justification for the ban on split infinitives, or the stricture that forbids qualifying a sentence with “hopefully”, or the objection to ending a sentence with a preposition, or for many of the other rules taught or followed as being “good English”. And yet, taken together, those rules have come to create a recognisable register: a tone, a rhetoric, a voice. However baseless its antecedents, when formal English is spoken, everyone recognises it for what it is: the language in which power speaks and expects to be addressed.

 

Formal English is not imposed from above

The English language has no central authority, not even an ineffectual one like the Academie Française. Everyone who has tried to suggest usage changes, or best practice, or new words, has had to do so from a position as a private citizen – or, at best, as part of a self-appointed body. None of them have had the power to compel correct usage. The mechanism by which, say, a language commentator’s suggestion becomes a teaching point in primary-school English, which is then carried forward into the solicitors’ letters and leading articles of a generation of adults, is an achievement of influence, not enforcement. Prescriptivism in English has to win hearts and minds; there is no state imprimatur to reinforce the message. Which leads us to a surprising conclusion:

 

Formal English is a descriptivist phenomenon

In Modern English Usage, Fowler suggested dozens of improvements to written English, some of which caught on: some, but not all. In the 1930s, a BBC committee invented dozens of words to describe new phenomena in modern life, some of which caught on: some, but not all. Proposing, it appears, is not enough: every piece of language change, from the accidental to the intentional, has to pass the test of usage.

Some of Fowler’s ideas were terrible, but some – such as his forgotten proposals for punctuating parentheses – were just as useful as his “which/that” distinction, which has become a staple of legal English. Similarly, the BBC committee failed in its primary task of inventing a new word for one who watches television (the corporation rejected  “auralooker” and went for “viewer”), but it did successfully popularise the term “roundabout” for the road junction. The unpredictability of these successes and failures suggests that prescription, just like natural language change, is subject to the mysterious processes of acceptance by which English is ultimately formed. That means many prescriptivist initiatives are doomed to failure: but it does suggest that the ones that have survived to create what we now call “formal English” have passed the stern test of public approval.

A new hopefully

28 Nov

I fear the people who don’t like sentence adverbs are not going to like this:

And, although I don’t normally have a problem with “hopefully”, for once I might agree.

Sentence adverbs – or, as linguists call them, “modal adjuncts” – are adverbs that, rather than modifying the verb in a sentence, express an attitude towards the sentence itself. They frequently appear at the start of the sentence, set off by a comma: “Hopefully, I’ll find them”; “Honestly, you may not”; “Frankly, my dear, I don’t give a damn.” Although all such words can operate as standard adverbs* – “she looked up hopefully”; “he spoke honestly for the first time”; “his eyes gazed frankly into hers” – when placed in certain contexts they take on a higher function: one of commenting on the thought being expressed.

Of all the common sentence adverbs, “hopefully” is the one that resonates with editors most, because it became the subject of a brief but heated usage debate about 50 years ago, as Geoffrey Pullum recounts in a blogpost on Lingua Franca:

The 1960s saw an increase in the frequency of modal-adjunct use for another adverb: hopefully. Alongside They’ll wait hopefully (“They’ll wait with hope in their hearts”), it became increasingly popular to use sentences like Hopefully they’ll wait (“It is to be hoped that they’ll wait”).

This unremarkable little piece of linguistic evolution might have gone unnoticed, if the aging usage specialist Wilson Follett had not bristled. It is “un-English and eccentric” to use the word that way, he asserted dogmatically (Modern American Usage: A Guide (1966), page 170), even though (as he said) the German equivalent “hoffentlich” is fine in modal-adjunct use.

Follett was dead by 1963 (his posthumous usage book was completed by Jacques Barzun and others), but he left a legacy: By the late 1960s, using hopefully as a modal adjunct was widely taken to be a grammatical sin.

As John McIntyre observes in You Don’t Say, Follett’s language was ferocious enough to have quite an impact – “how readily the rotten apple will corrupt the barrel”, he says at one point – and the disapproval spread to other style manuals. But it proved to have shallow roots: faced with popular usage and the existence of other unproblematic sentence adverbs in English, such as “mercifully”, people began to retreat from their positions. As Prof Pullum says:

For a few years, battles raged and peevers fumed. But the opposition peaked when disco was young, and Barry White and the Love Unlimited Orchestra were hot. By 1979, [conservative language columnist] William Safire had accepted the modal-adjunct use of hopefully … The dispute was basically over.

It was, having started and finished in less than two decades – although Associated Press, out of an abundance of caution, maintained its prohibition on the usage until 2012, when it finally caved in.

But although the acceptability of “hopefully” as a sentence adverb is now settled, that does not mean that it succeeds as one in all situations. While it is certainly not true that modal adjuncts always need to be at the start of a sentence, or even set off with commas, to work, as Prof Pullum shows in the following example –

Compare “He was flirting with her too obviously”, which comments on the manner of the flirting, and “He was obviously flirting with her”, which doesn’t.

– there is nonetheless something amiss with the Gary Younge standfirst that prevents “hopefully” from functioning as intended.

The sentence is an intricate one: the main subject and verb, “I decided”, are then followed by a long comparative construction: “ignoring a feted white supremacist was more dangerous than hopefully exposing him”. In fact, the comparative construction functions as a complete sentence on its own; its verb is “was”, and its subject is “ignoring a feted white supremacist” – a verb phrase functioning as a noun, or, in other words, a gerund.

The object in the more/than construction is also a gerund – “exposing him” – and it is this idea of exposure that “hopefully” is trying to comment on, rather than directly modify. But, if anything, it really only succeeds in doing the latter and creating the idea of “exposing in a hopeful manner”.

It is possible to use modal adjuncts with gerundive constructions – “Hopefully, going to the coffee shop won’t make me late” – but I can’t think of an example where they succeed other than when placed at the start or the end of a simple sentence. In this standfirst, however, we have a “sentence adverb” that is intended to modify neither the verb it sits next to nor the sentence as a whole, but instead to act as a comment on one of two gerunds contained in an independent clause. Setting it off in commas might help a bit, but, I fear, not enough. Sentence adverbs can do a lot, but I don’t think they can do that much.

 

*Also known as “manner adjuncts”.

Future descriptive

7 Aug

STYLE NOTICE: 7 AUGUST 2089


To: All editorial staff

From the production editor

 

Dear all

Several of you have been asking for a definitive style ruling in recent weeks about the now-perennial “cannot be underestimated/cannot be overestimated” debate. I know feelings have run high on the issue, and until now we have tried to preserve the traditional distinction in meaning in our pages, even though the interchangeability between the two phrases in spoken English is now almost total.

Historically, it is true that – as recently as the early 21st century – the correct use of the phrases was highly dependent on context, and to say then that the prime minister’s intellectual capacity “cannot be underestimated”, when the opposite was meant, would have been to cause considerable offence. But the error has now become such a common one that it is time to seriously address the question of whether it is an error at all.

Of course I am aware, as some of you have kindly pointed out, that under and over “mean completely opposite things” and that the distinction is “perfectly obvious to those who are prepared to think about it”. Of course it is, but the everyday rough-and-tumble of language has a way of wearing fine distinctions – even useful ones like these – smooth. Look, for example, at how the similar (and now vanishing) terms “biennial” and “biannual” became so confused in the 1900s that the following definition once appeared in Chambers’s 20th Century Dictionary:

biannual (bi-an’-ū-əl) adj. two-yearly: also half-yearly.

And consider “head over heels” – a phrase universally understood in its metaphorical sense, but which, parsed logically, says the exact opposite of what it means.

I am reluctantly coming to the conclusion that “cannot be over/underestimated” have, through widespread usage, fallen into the same category of phrase as “head over heels”: those that can only be understood in the round, and not by parsing every word individually.

I am aware this decision will disappoint many of you, especially those of you who have pointed me to a significant strand of linguistics scholarship that disagrees with me. Writing in the early 2000s, eminent figures on the influential website Language Log contended against the acceptability of what was then called “misnegation”. Comparing “cannot be underestimated” with the (now-uncontroversial) phrase “could care less”, Professor Mark Liberman wrote:

I’ve argued that “could care less”, where modality and scalar predication seem similarly to point in the wrong direction, has simply become an idiom. Shouldn’t the same be said for “cannot underestimate the importance”?

I don’t think so. As I’ve argued before, there’s a crucial difference.

Whatever is happening with “cannot underestimate” applies equally to “cannot understate”, “impossible to underestimate/understate”, “hard to underestimate/understate”, “difficult to underestimate/understate”, “cannot be underestimated/understated”, “hard to underrate”, “cannot be undervalued”, and many other common ways to re-express the same idea.

In contrast, alternative formulations of “could care less” are rare, and can only be understood as bad jokes, to the extent that they’re not simply puzzling.  Thus one semantic equivalent to “could not care less” might be “could not possibly have less concern” — and we find this in a published translation of Montaigne…

“However, if my descendants have other tastes, I shall have ample means for revenge: for they could not possibly have less concern about me than I shall have about them by that time.”

But in this case, Montaigne means to imply that his concern-meter will be pegged at zero, not at its maximum value. And more generally, we don’t see things like “I could possibly have less concern” used with the meaning idiomatically assigned to “I could care less”. This is the behavior that we expect from an idiom; and the different behavior of “cannot underestimate/understate/underrate/undervalue” is what we expect from a psychologically probable error.

Other scholars at the time contended that “cannot be under/overestimated” was indeed an idiom; but even if they and I are wrong and it is a mistake, it seems to be a mistake that English-speakers are never going to stop making. And, as we all know to our frustration, appeals to reason over usage rarely succeed in these matters because language doesn’t listen to reason.

Therefore, henceforward,  “should not be underestimated” and “should not be overestimated” shall in all cases be deemed to be equally correct ways of saying the same thing, which is something to the effect of “should not be evaluated incorrectly”. The style guide will be updated accordingly.

Believe me, it gives me no pleasure to come to this conclusion. But our language has changed around us: and with the 22nd century just over a decade away, we have better and more significant things to do with our editorial resources than enforcing a distinction that, to our readers, is increasingly becoming inaudible.

Yours as ever

 

 

Production editor, the Tribune

Nation shall prescribe unto nation

11 Jul

I’d have gone for “visionnaire” myself. I’m glad we didn’t get “auralooker”:

Historian Nick Kapur’s fascinating Twitter thread about the BBC’s Advisory Committee On Spoken English and its influence on modern speech reveals just how close we came to referring to anticyclones as “halcyons”, but also offers an illuminating insight into what prescription in language really means.

Because of course, there is not one kind of linguistic prescriptivism: there are two. One opposes all language change and all neologism, and attempts to conserve current norms as an eternal standard. But the other seeks to deliberately modify language: not to reject new words, but to invent them, and to influence speech and writing to go in new directions – such as the campaigns to popularise Ms and Mx as neutral  honorifics. It is this second kind of prescriptivism, which one might call activist or progressive prescriptivism, that Kapur is tweeting about here.

The story begins, he relates, in 1926, when Lord Reith sets up a committee to help resolve one of the many problems a pioneer national broadcaster has to solve: how should you pronounce certain words on air? (This group, the Advisory Committee On Spoken English, still exists today, doing very similar work to help BBC broadcasters.) Then in 1935, faced with the question of what to call users of the new medium of the day – television – a new sub-committee was set up, not just to advise on pronouncing words, but to invent some new ones. Led by the Anglo-American man of letters Logan Pearsall Smith – an eager language reformer – the Sub-Committee on Words generated the alternatives listed above to start the debate (although it eventually rejected all of them and recommended “televiewer”, subsequently shortened to “viewer”).

After that, the sub-committee remained active, and widened its remit to mass-produce new words for broadcast far beyond the new industry’s immediate needs, eventually becoming so extravagant and implausible in its inventions that an exasperated chairman of governors closed it down in 1937. But by then it had created several terms – “roundabout” for the road junction, “serviceman” for members of all the armed forces, “art researcher/art historian” to replace the German word “kunstforscher” – that are now commonplace in modern English.

The impression descriptivist scholarship frequently gives is that language is an unknowable stew of errors, localisms, homophone confusions and misreadings, prone to unpredictable change. The emphasis, or the cultural preference, often seems to be bestowed on the unwilled variations to language, not the willed ones. But Kapur reminds us that English is also highly susceptible to the approaches of those who have a design on it, from Edwardian grammarians like Fowler to equalities campaigners to spelling reformers like McCormick at the Chicago Tribune. There are words and conventions in many registers of modern English that were created deliberately by people who wanted to see them catch on and took the opportunity to make it happen.

Sometimes, of course, prescriptivism is institutional, and benefits from that privilege. It might be justifiably argued that the BBC’s committee, as a quasi-official body proposing usage for the nation’s only broadcaster, was in a very strong position to succeed, particularly as it was inventing terms for then-unnamed phenomena. But the Academie Française, which is attempting to do for French today almost exactly what the BBC committee did for English in the 1930s – and from a similarly state-sanctioned position – is greeted with widespread indifference and derision for its efforts.

And in any case, innovative prescription does not need an official platform to succeed. This blog has discussed at length the extent to which Fowler’s suggestions have influenced modern formal and legal English, but Fowler himself was no state official, nor did his books bear any government imprimatur (although Churchill is said to have recommended Modern English Usage to his staff after it came out). His books were a success because, then as now, there is a sustained public appetite for advice on how to engage with formal English. (Indeed, given the existence of a generation of professional linguists who consider it their role to observe rather than advise, the field for such material is possibly clearer today than it was then.)

This is not to say the process is easy: frequently, big innovations just don’t catch on.  There is no doubt that some of the committee’s ideas, like some of Fowler’s, are much worse than others: for example, one member apparently felt it desirable to create a shorter term for “inferiority complex” (“inflex”), and another proposed “yulery” as a collective term for Christmas festivities. The point is not that Fowler or the committee were always “right” about what they proposed; the point is – at least sometimes – that they were successful.

Usage remains the timeless, and the only, judge of current English. But usage does not simply adjudicate on terms that have risen up unbidden from the demos; it also sits in judgment on peri-statal prescriptions and private linguistic entrepreneurialism. Due process is afforded to all new words, whether they are accidents or designs. Linguists say that language is a democracy, and it is: a democracy in which, among other things, anyone is free to prescribe and see what happens.

Too chill for comfort

13 Sep

If you were looking for snark, the official Twitter feed of a major American-English dictionary might not be the first place you’d look. But, oh boy.

A few days ago, Gabriel Roth of Slate unwisely allowed his inner prescriptivist out for an airing after reading the following tweet from Merriam-Webster Dictionary:

Articulating the silent twinge that many editors and writers feel at the sight of descriptivism in action, he wrote:

[Screenshot: Gabriel Roth’s tweet]

And then unexpectedly this reply, from the dictionary itself, appeared:

[Screenshot: Merriam-Webster’s reply]

Ouch. Owned. Or – to use the correct spelling of the word in this context – “pwned”. As a rueful Roth wrote later, “I find myself wistfully remembering the days when tweeting at brands was a safe, innocuous pastime”. And other responses to M-W’s intervention have been broadly favourable: the tweet was rude, yes, commenters thought, but also uncompromisingly truthful about the ineluctable nature of language change.

However, scrolling down through M-W’s Twitter feed, it emerges that this is not the only time it’s taken a bold line in such matters. Five days earlier, in similarly lively terms, it made the following observation:

Well, hang on. Yes, “enormity” can indeed mean “great size”, and has done for centuries. But, no, it’s not “fine”: currently, as a word, it’s totally skunked. As we discussed last month, “enormity” is hovering uneasily on the brink of a permanent change in meaning, but is still tending to drag its other meaning of “moral horror” into simple discussions about size. It’s a very tricky word to be employing at the moment; a while ago, for example, we saw fit to remove it from a news story about the heated subject of the Scottish referendum because of its overtone of opprobrium. It’s far from clear that, in these circumstances, a major dictionary should be recommending it quite so breezily. Authorities are looked up to; these things get taken seriously.

As this blog has had occasion to remark before, people don’t require help with informal English. They speak it well. They do not seek the assistance of their editor friends when composing a tweet or posting on Instagram; but they do, sometimes, when updating their CV or writing to a solicitor. What they want is help with formal English: a register whose social significance they grasp, but one in which they perceive themselves not to be fluent.

This is when they turn to the dictionary: to be briefed on the meaning of a legal idiom, or the appropriate use of a word in their own reply: to find out, perhaps, whether “enormity” means what they think it means. But they are doing this at a time when one of the prime objectives of linguistics is the debunking of the prescriptive maxims about language that have been taught during the last two centuries. An unsatisfactory dialogue has therefore developed between linguists and the public in which queries about the niceties of formal English are met only with assurances about the validity of informal English. For the last several decades, it seems, lexicographers have been talking about what’s changed in the language while their readers have been asking about what hasn’t.

The spirit behind this objective is democratic to a fault, and the efforts to expose the frailties of formal English are intellectually impeccable. But nonetheless, they are starting to amount to the total deconstruction of a dialect that many people still have no choice but to speak.

The ghosts of Fowler, Strunk and White still haunt the sphere of formal discourse. It is highly commendable that more modern authorities like Merriam-Webster should be getting involved in the conversation about usage. But burning a grumpy prescriptivist on Twitter? Waving off debate about a word in difficult transition?  That isn’t advice; it’s advocacy. Roth is right: counsel as blasé as this is just a little too chill for comfort.