Commentary

Artificial Society

We have trained ourselves to be robots.

Example: let’s say you’re an engineering professor. Let’s say you’re even a good one who worked for thirty or forty years in a private practice, where you actually did real engineering for a paycheck. Perhaps you weren’t good enough to rise to the rank of a firm partner or anything, but you at least retired with a good package and decided to pursue a side gig in teaching. So now, at sixty-five, you’ve found yourself with an adjunct faculty parking space at the community college down the street, and a roster or two of engineering students.

Engineering is a technical category. A lot falls under its heading, from electrical to civil, and for our example here it doesn’t really matter which one your background is in. We can keep it simple and say it’s civil. What matters is that you’re enough of an expert to have practiced the subject matter and to present it in a lucid fashion. But you’re not an expert in machine learning, and you’re an adequate but not terribly proficient writer, insofar as you spell your words correctly and your grammar earned passing marks from schools that still taught it back in the sixties. More importantly still, civil engineering classes are not ones that require creatively worded essays peppered with amusing or complicated syntax; if anything, to you as the professor, the clever bastards who write that way just make the task of grading papers all the more arduous.

The value of the papers lies instead in how lucidly they communicate the technical features of the project or concept in question. You know this. You once had to submit reports to the county and state review offices, and all of them, with few exceptions, were very stale, template-powered stacks of paper. Frankly, you believe that if such civil reports could have been accurately automated, they should have been, as it would have spared you and the draftsmen you assigned to type them many hours—days, weeks, perhaps even months—of labor over the course of your career.

And then, one morning while refilling your coffee in the department lounge, you overhear a heated conversation between two of your younger colleagues—both full-time instructors and both working toward tenure. The subject: ChatGPT and students using it to cheat on short written assignments. Cheat, you think to yourself. Was it really cheating to use an advanced tool to complete busywork?

The engineer in you almost wonders whether this is a bad thing. You really have no idea whether everyone in your class is doing it, or just that one guy in the back, or whether anyone at all is cheating on their papers. It’s hardly plagiarism, since machine learning algorithms can hardly be considered free agents whose work can be stolen and impersonated. They’re tools, after all, and they’re doing exactly the function such tools were designed for. If you can automate tedium, what’s the problem?

But then the semester passes and you’re at your desk grading the final exams, a majority of which are actual problems that require some basic combination of physics knowledge and problem solving. “Farmer John needs a retaining wall built on a hill that rises ten feet over an uneven slope of eighteen.” “Plot a stormwater management draft for this parking lot.” That sort of thing. And half the class fails, an outcome that does not at all reflect the results of the papers you’d graded over the last two and a half months. And you look at your syllabus, at the weights assigned to the papers and the semester’s two exams, and you realize that grades on the papers, if good enough, can outweigh grades on the finals, if those aren’t too bad—and that most of the people who failed the final are going to get passing marks for the class.

These students of yours who failed the final exam have no idea how to do civil engineering, but because they could use a machine to generate accurate tables, measurements, equations and technical data on some papers, they’ve passed your class. Given what class you’re teaching, some of these students might be sending out resumes as early as next year. But how did it come to this?

Undergrad Writing Skills

To step back for a second, consider the immediate context of the hypothetical: the modern university. That a complex set of algorithms could generate on demand—perhaps after one or two adjustments to its prompts—a paper warranting passing marks from a professor isn’t something that should come as a surprise. No matter the field, undergraduate papers aren’t anything to write home about; aside from term theses—and in some cases, even including these—they’re short assignments intended to gauge a student’s awareness of and familiarity with the generalities outlined by the class syllabus. More detailed and demanding material usually isn’t brought to a student’s attention until graduate-level work, if a student bothers to put up with additional schooling after receiving their bachelor’s.

In addition to this, most undergrads are between the ages of eighteen and twenty-three. Even the most finely educated of this demographic only rarely produces writing of a caliber beyond what used to be considered basic literacy about a century ago. As many have pointed out over the years, one wonders to what extent literacy rates have actually risen over this period versus how much the term ‘literate’ has come to be abused. Recognizing one’s own name on a sheet of paper and stumbling through an office memo should hardly count as literacy.

One can attribute the destruction of literacy, in part, to the public schooling apparatus. Federal, state, and city- and county-wide mandates on curricula, standardized testing, and the generally poor stock of character common to both the teaching and administrative bodies all point toward a maliciousness directed at the act of learning. The not-so-gradual change in the country’s demographics has also coincided with a classroom environment that is increasingly difficult to teach in, even when a school manages to hire good instructors.

Discriminatory hiring policies, coupled with years of social engineering, have discouraged the more level-headed among us from considering teaching or education as viable career paths. Instead, these have led the most neurotic and easily programmed—or worse, the most perverted—to pursue education as a means to some self-righteous end.

Academia, too, has been hollowed out over the past forty years by both ideologues and grant-hungry opportunists. The shift in demographics is even more apparent considering the effects of the long-standing practice of affirmative action, as well as the now well-documented incidents of even Ivy League schools discriminating by race in admissions testing. This is only important when one considers the state not only of the instructional body at academic institutions, but also the caliber of studentry to be found there.

The important issue is this: the degeneration of the education apparatus, particularly in America, led not only to lowest-common-denominator instructional methods, but also to an emptying out of critical thinking abilities across most if not all high school and college graduates. Certainly, a few slipped through the cracks, as anecdotal accounts attest. But in large part, those fortunate ones who have obtained for themselves the ability to self-reflect and think critically on matters beyond the narrow scope of their field are those who do so in spite of their educational background. In all likelihood, most of these people discovered how to do this after they had graduated from their formative years in school.

Critical thinking applies to how one engages with information. How one engages with information includes how one engages with media, with the written word, with just about anything that carries semiotic or symbolic value. Much of this is referred to as ‘brain drain’: the gradual dulling of American ingenuity and thought that so worries the generations who built our social infrastructure. But it’s important to note that although its causes are many, the common thread woven through all of them is how intentional this brain drain phenomenon is. And for that, we must consider our next example.

Chinese Room Syndrome

This example is brought to us by John Searle.

A man is in an enclosed room, and his only access to the outside world is the crack of daylight beneath the door. In this room is a computer. He has no idea how it works, but it’s—allegedly—a perfect translation machine, taking inputs in the man’s native language and supplying outputs in written Simplified Chinese. The man does not know a single character of Chinese, and for the sake of argument, doesn’t even know that the spoken language is tonal. It’s possible the man does not even know that Chinese is a language. Perhaps he considers the strange lines and dots the machine prints out to be some sort of esoteric code.

The man’s task is simple. Periodically, from under the door comes a slip of paper with a statement on it. The statements are irrelevant to the man. He takes the statement on the paper, feeds it into the machine, and the machine supplies him with a translation in Chinese, which he then dutifully slips back into the outside world. As it turns out, the machine can work in reverse, too; every once in a while, a slip of mysterious Chinese characters comes in from outside and, after it is fed into the machine, a statement or two in English falls from the output and this, too, is released into the wild.

This happens a lot, but how often a day, the man couldn’t tell you even if he wanted to.

Outside, the man’s room is known as the Chinese translator’s office, and it’s kept locked for reasons only the higher-ups might ascertain. But to everyone else, the Chinese translator lives in there, and he seems to know and understand Chinese even better than the visiting Chinese diplomat who toured the facility back in March. They had a conversation about A Dream of Red Mansions, but the diplomat did not realize that he was talking to himself the entire time.

Embellishments aside, Searle’s thought experiment was, in its most limited reading, intended to illustrate the error of attributing consciousness or self-awareness to artificial intelligence. The machine does not know what it is doing; the machine, in fact, does not ‘know’ anything in any personalistic sense. The machine operates according to formalized, preset rules and instructions, and it produces outputs when inputs are given.
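To make the mechanism concrete, here is a minimal sketch of the room in Python. The rule table is an invented toy, of course—a real translation system would be incomparably larger—but scaling the table up adds no understanding to the loop, which is precisely Searle’s point.

```python
# A toy model of Searle's Chinese room: an operator applies formal
# rules to symbols he does not understand. The rule table below is an
# invented stand-in for whatever formal procedure the room contains.
RULES = {
    "Hello": "你好",        # to the operator, one opaque string maps to another
    "Thank you": "谢谢",
    "你好": "Hello",        # the machine works in reverse, too
    "谢谢": "Thank you",
}

def chinese_room(slip: str) -> str:
    """Take a slip of paper in; push a slip of paper out.

    Nothing here 'knows' Chinese. It is symbol manipulation
    according to preset rules, and nothing more.
    """
    return RULES.get(slip, "??")  # unknown inputs get a blank stare

if __name__ == "__main__":
    for slip in ["Hello", "你好", "Thank you"]:
        print(f"in: {slip!r} -> out: {chinese_room(slip)!r}")
```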

This is still true of current-day ‘AI’ capabilities. The algorithms have improved considerably and the learning processes of these programs have greatly increased in complexity, but there’s no person at the center of the machine. No ghost in the shell, so to speak. The leap from autonomous self-correcting algorithms and self-modifying learning programs to full personhood—complete with awareness, memory, will, and intellect—is an unbridgeable gap, no matter what any computer scientist doped up on Asimov tries to tell you.

It’s simple enough to consider the Chinese room within the context of AI specifically, but extrapolate that out to what we know about modern education, both in its public schooling apparatus and its university system. Lowest-common-denominator instruction, coupled with standardized testing programs, means that instructors are compelled to teach to the regional exam. Underachieving students learn to mark their answer sheets in ways most likely to get at least a C average, while overachievers memorize the sort of syntax, paragraph structure, and jargon that their teachers instill in them for written assignments. Any notion of internalizing knowledge and learning the material escapes not only the grasp but even the interest of both ends of the spectrum.

Granted, sometimes overachievers have genuine interests that academic environments help bring out, and true, sometimes underachievers are highly skilled in productive hobbies that academic environments have no ability to foster. But in both of these cases, the important part is understanding the root of public education. Overachievers aim to please in order to navigate their way through the system, while underachievers don’t care and have already given up on finding gratification in academic achievement. No individual of either group is primarily interested in learning anything that the education apparatus ostensibly has to offer them, and no teacher within that system is interested in teaching it, either. Those few who are interested quickly wash out once the burdensome constraints of bureaucratic mismanagement and pettiness hammer their idealistic egos into dust.

Consider what this means with regard to actual knowledge. A middling student who fell in with overachievers can graduate in the top quarter of his senior class, earn qualifying marks on everything from physics and pre-calculus to literature and history, only to forget it all in about a year. Unless he uses some aspect of it in university or the workplace, none of that information occupies a region of his psyche that qualifies it with meaning.

Without interest, there is no reason for a person to attach meaning to what he is being instructed in. And without meaning, interest, or instruction, this person never gains knowledge. This is what is meant by engagement with given material. A person can read a book, but if he’s not interested in it, or if he was never taught to think and reflect on it, then he never learns how to correlate the written word before him with the meaning those words are supposed to convey.

So we return to the Chinese room. Perhaps much of this sounds hyperbolic to anyone who emerged from the public schooling apparatus mostly unscathed, and that’s fair enough. But it shouldn’t sound that exaggerated. Take, for example, the stereotypical midwit: he perfectly exemplifies the sort of high school or university graduate who wasn’t engaged quite enough to be an overachiever, but neither was he repelled enough to be an underachiever. Instead, he managed to color within the lines provided to him by a dissociated public servant in order to get grades that sent him to a university that made him do it all over again. If he’s lucky, he encountered a couple of professors there who actually did care about whether he learned their material. But those professors can only work with what they’re handed by the public education apparatus.

The long and short of it is this: it’s difficult to sympathize with professors bewildered at students who use machine learning to effectively fake their assignments. The method of education intrinsic to a mass industrialized public education system has made this quite predictable, and arguably impossible to avoid. They are reaping what their predecessors have sown: in the most modernist sense, formal education crafts its students into veritable Chinese translator’s offices.

Non-Player Characters

But we can go even further than this. Why did schooling get to be this way in the first place? The penetration of DEI ideologues into the heart of education doesn’t adequately explain how so many students have grown increasingly unable to think for themselves. Laziness on the part of instructional and administrative staff could account for specific cases, but it’s impossible to convincingly level that charge against every teacher employed by the systems involved. And although immigration from the third world has done a lot to force the national averages of test scores and reading comprehension down, this fails to explain why the systems of education themselves seem hard-coded, designed to turn human beings into robots.

We can trace this problem back, predictably, to contemporary modern thought. Although it has its precedents in the centuries leading up to the Great War, many of modernism’s broader implications would not be realized until the build-up to and realization of Europe’s great continental suicide of 1914. By this we can refer specifically to the implications of mass industrialization, of dualistic philosophies of mind (and subsequent efforts to reduce mind to mere brain matter), and of the increasingly specialized fields of science and medicine that heralded a reign of middling experts over humanity’s so-called progress. The wake of both world wars saw the rise of both the managerial class who ran the increasingly complicated networks of infrastructure, production, and services in America, and the expert elites who took it upon themselves to dream of big utopias operated by Smart People such as Themselves.

As is somewhat well known, this great fulfillment of modern thought can be understood as both a philosophical and theological error (or heresy, for those so inclined): that personhood is reducible to material substances and that it can be defined according to such material substances.

Philosophically, this means atoms and chemicals, sinew and grey matter. Electrical phenomena that neither neuroscientists nor analytic philosophers of mind seem to totally understand compose the totality of human consciousness, and these reverberations of wattage around one’s skull account for everything from moving one’s body to make coffee, to contemplating neuroscience, to appreciating the way a lab assistant’s capris hug her calves.

Theologically, it replaces man’s ability to find definition according to his placement in God’s vision with man defining himself in contradistinction to inanimate but tangible matter. He is from dust and to dust he will return; this much remains obvious, but in modern thought, the person tasked with stewarding this dust in the meantime gets erased almost entirely. It might seem like a paradox to consider this in light of modern indulgence, particularly given the extent to which gluttony, lust and sloth reign over contemporary society. But each of these is an impulse that characterizes man’s erasure from himself and removes him further from the beatific vision. They draw him away from his identity rather than help to better define it. Such is the nature of sin.

If defining man according to his tissue is more important than defining him within the scope of salvific history, then questions of morality become little more than distant retreating figures in an ever-advancing rear-view mirror. Of what use is morality to a collection of objects, even if those objects do, for some reason, have sentience and willpower? It’s convenient to get along with one another, of course, but without an ultimate justification past the vague idea that life might be easier by pursuing virtuous ends by virtuous means, moral systems quickly collapse into acknowledging systems of power and behaving accordingly.

Worse still, these systems of power naturally grow to encompass the totality of human action. That one must know what he is doing in order to act, and that a man’s actions define his character, are both ideas antithetical to this sort of collectivist mindset. Rather, the power system, particularly in bureaucratic settings, relieves its members of responsibility and, therefore, of even having to think about anything. It provides a cover under which its bug-like inhabitants can hide their shameful subservience to the void.

Most of us by now are familiar with the NPC memes that circulated the internet a few years ago, lampooning people who reported having no internal monologue, or an inability to ‘see’ a particular object interiorly when that object is mentioned and discussed. Asking such people to rotate an imaginary cow across three axes would be met with similarly blank stares of incomprehension. On one hand, it is not beyond belief that some people simply lack the awareness and necessary pattern recognition to either notice or develop these mental traits, particularly when things like IQ are taken into account.

On the other hand, and more charitably, it seems unlikely that people who are capable of operating in a western society—specifically those with an average IQ of around a hundred—would lack such basic tools for survival. And there are a couple of obvious reasons for this: interior monologues have been used as narrative devices for millennia, and the use of thought bubbles in everything from illustrations to more subtle depictions in classical works of painting indicates that conceptualizing an interior world, or formulating thoughts into language before vocalization, are hardly novelties of human existence. Furthermore, the simple acts of basic survival—building huts and houses, domesticating animals, practicing agriculture or craftsmanship, all marks of a complex if pre-industrialized society—require enough interior awareness to conceptualize projects before carrying them out. It seems absurd to even have to mention this.

However, perhaps the term ‘pre-industrialized society’ is the breaking point. Not to come across as a Luddite, but prior to mass automation, people had to think quite a bit more in their daily lives, and as commentators increasingly like to point out, the consequences of obvious stupidity tended to range from life-alteringly injurious to downright lethal. This may sound ridiculous considering how technical the job market has become in the last half-century, but consider the general state of the public when off the clock. The whole notion of an on or off switch to labor is itself somewhat modern, and the erasure of leisure has only made this crisis of thinking worse.

Life is not as physically dangerous as it once was. A subsistence living is much easier to fall into than to strive for, and the gross malaise of postmodern ennui is something too many in the west have simply grown up with. But so too do our educational institutions inform us that the soul doesn’t really exist, that the human body is something of a mysterious automaton, and that human capital is as natural a resource as ore mined out of the ground by shovels. Worse, not only are we informed of this in schools, but the post-industrial economy has already moved on from the earlier industrial revolution that seemed to verify these ideas in the first place. Now it functions on an exported industrial revolution while priding itself, largely, on technical services. Where once there were peons who at least still worked with their hands, albeit in a factory, now they are denied even that.

Admittedly, this has given rise to the current paradox. The infrastructural necessities of our service economy—ranging from the logistics of maintaining supply chain networks, to the prudence of preventing investment banking networks from collapsing, to the technical knowledge required in the medical, defense, and software industries—require specialists with higher than average training and expertise. This is true in the public sector as well as the private. But while this sort of expert class, above that of the mere managers, has been made fundamental to the survival of our present civilization, it has shrunk the broader pool of people who need to think, in order to survive, beyond the limits of their expert-written itineraries.

Education for the Machines

We return, then, to our education system.

Some internet users have complained on Reddit—fittingly—that programs intended to detect AI-generated text have a less-than-favorable success rate in actually doing so. Apparently, students can write just competently enough to be accused of being machines, and college professors have no way of adequately dealing with this aside from logging it in some casebook somewhere. And as one user suggests in the replies, there’s always the option to ‘run it through ChatGPT’ so it won’t sound so much like an AI-generated response. One self-described adjunct professor even bemoans the use of AI-detecting technology; when run through the same detection software, his own papers were apparently judged to have been written by a robot with 100% certainty.

To compound the observations made by these Redditors, and the point made earlier about the quality of undergraduate writing, it turns out that college professors aren’t the only ones having a hard time differentiating the words of robots from those of flesh-and-blood students. Linguistics experts are, too—and the article reporting this was from early September of this year, a whole three months ago. As far as the field of machine learning is concerned, that’s ancient history.

Researchers used ChatGPT to write research abstracts and presented these to linguists in order to determine whether they were artificially generated or written by human beings. “Experts from the world’s top linguistics journals,” the article states, “could differentiate between AI and human generated abstracts less than 39 percent of the time.” The actual figure: 38.9%, from a pool of 72 linguistics experts who were given four samples apiece.

The study itself outlines its methods in a bit more detail. Predictably, it’s locked behind academic subscription services and thus unavailable to the general public, but excerpts still exist at ScienceDirect. According to its abstract, the study was conducted to determine three things:

1) the extent to which linguists/reviewers from top journals can distinguish AI- from human-generated writing, 2) what the basis of reviewers’ decisions are, and 3) the extent to which editors of top Applied Linguistics journals believe AI tools are ethical for research purposes.

The study’s abstract informs us that “despite employing multiple rationales to judge texts, reviewers were largely unsuccessful in identifying AI versus human writing,” and then provides the 38.9% statistic mentioned above. Not one of the experts correctly identified all four samples they had received, but thirteen of them correctly assessed three of four, while nine of them were wrong about every sample. The rest of the reviewers were split down the middle, correctly assessing either one or two of the four samples provided.
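As a quick sanity check on those figures—assuming, since the excerpt only says the rest were split down the middle, a roughly even division between the one-correct and two-correct groups—the aggregate accuracy lands right around the reported number:

```python
# Sanity check on the reported 38.9% figure from the breakdown above.
# The exact split between the one-correct and two-correct reviewers
# isn't given, so an even 25/25 split is assumed here for illustration.
experts = 72
samples_each = 4

correct = 13 * 3 + 9 * 0 + 25 * 1 + 25 * 2   # 13 got 3/4, 9 got 0/4, rest split
total = experts * samples_each               # 288 judgments in all

print(f"{correct}/{total} = {correct / total:.1%}")  # -> 114/288 = 39.6%
# A 27/23 split would give exactly 112/288 ≈ 38.9%, matching the paper.
```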

On one hand, the relative sophistication of academic research might imply that fabricating research abstracts goes well beyond the sort of expertise needed for comparatively casual undergraduate essays. As far as social and academic importance goes, this is true. But there’s an important caveat: abstracts aren’t in themselves difficult to write. Like all forms of technical writing, any semblance of style or flair is intentionally absent from research writing. The result is, at least in theory, purely data-oriented, somewhat formulaic, dry reading material that exists only to present certain information.

Fabricating whole studies without AI ‘hallucination’—in which the machine makes things up and usually contradicts its previously generated text—is still beyond the scope of ChatGPT, at least for now. But fabricating an abstract? Keep in mind what machines do best. ChatGPT has already been trained on a vast scrape of the internet; when tasked with presenting that information, all a prompter has to do is tell it to write its response in the form of a research abstract.

The abstract is just another genre of writing, and the machine can follow the rules of that genre in order to imitate it and generate new abstracts. That they aren’t truly abstracts, as they’re not abstracting any actual study, has nothing to do with what the machine was tasked with carrying out. It’s just a matter of repackaging existing information in the form of a particular genre.
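To illustrate just how little this asks of the prompter, here is a hypothetical sketch using OpenAI’s Python SDK. The model name, the findings, and the prompt wording are all placeholders—nothing is known here about the study’s actual prompts—but the entire trick amounts to ‘write this as a research abstract’:

```python
# A sketch of how trivially a model can be asked to imitate the
# abstract genre. Assumes the openai package (v1.x) and an
# OPENAI_API_KEY environment variable; all inputs are placeholders.
from openai import OpenAI

client = OpenAI()

findings = (
    "Reviewers identified AI-generated abstracts at rates below chance; "
    "positive identifications relied on vague stylistic cues."
)

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # placeholder; any chat model would do
    messages=[{
        "role": "user",
        "content": (
            "Rewrite the following findings as a formal research abstract "
            "for an applied linguistics journal, in a dry, data-oriented "
            f"register, under 200 words:\n\n{findings}"
        ),
    }],
)

print(response.choices[0].message.content)
```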

This is with the current tech. GPT-5 might be due out sometime in 2024, according to some rumors. It’s quite possible it’ll be functioning well before then, and there are reasons to believe that, unlike its predecessor, which is hamstrung by ‘hallucinations’ when prompted to generate long-form writing, it’ll be capable of imitating whole research papers and generating entire books. More than that, its interactive capacity is almost certainly going to far outstrip anything we’ve interacted with before, and not just in simple chat form.

Sam Altman, the co-founder of OpenAI, which created ChatGPT, has already expressed concerns about how these tools can and will impact future elections. He’s not even particularly worried about DeepFake technology but rather what he calls “this customized, one-on-one persuasion ability of these new models.” He goes on to clarify this as “systems on the internet powered by AI to varying degrees that are subtly influencing you.”

To some extent, these systems already exist. More worryingly, the infrastructure to support them—both the technological hardware and, more importantly, the psychological mindsets of the users whom these systems would be fooling—is also already in place. Altman speaks of most internet users having already developed “antibodies” to DeepFake technology, and while he may be speaking from a position of optimism, most people with even mildly developed and attentive critical thinking can sniff out current DeepFake fabrications. But developing antibodies to the sort of subtle suggestion he’s talking about requires a psychological immunity that attacks the framework of modern thought itself. One has to go against the sort of internalized robotic thinking discussed earlier. While it’s not impossible, it’s certainly a tall order to ask of the common public.

Conclusion

The worst part of all of this, and perhaps the most controversial to consider, is that the field of machine learning is not itself bad. Far from fears of either a Terminator-like Skynet or a Star Trek-like Soong-type android, AI is intelligent only in the term’s purest definition. This intelligence lacks a will, save for the will of those telling it to do things. And it’s not going to just magically obtain one, either. Artificial intelligence does not artificial persons make.

The future of communication online does look bleak, however, given the inevitability and pervasiveness of AI and machine learning. Dead Internet theory was conceived, whether ironically or not, years ago, but it’s already a growing and somewhat obvious reality. AI is already used to generate content, imitate internet personalities, fabricate news, and even impersonate love interests. But the issue at heart is not whether one is capable of fighting the advent of this technology. That ship has sailed. The issue is to determine what place it deserves in one’s own life, and more importantly, what place those fields it is positioned to dominate deserve to have in one’s own life. Media, entertainment, news—all are going to be hit by this, even if or when government regulation attempts to put its fingers on some end of the scale, to either mitigate the damage or at least tilt it toward favoring its own ends. The only apparent answer to the AI revolution is to stop functioning like a robot. It’s to stop thinking like one. It’s to stop being one.

