In Nicolas Poussin’s 1638 painting The Arcadian Shepherds, or Et in Arcadia Ego, unsettled shepherds gather around a starkly cut stone tomb, while one of them traces the inscription with his finger: “and [yet] in Arcadia, [here too] I am.” They are accompanied by an imposing female figure, stylistically based on the Juno Cesi statue currently in the Capitoline Museum. With her diadem, blue and yellow clothing, and hand on the back of the shepherd pointing at the letter “R” in the inscription, she most likely represents the personification of Reason, offering some form of comfort or interpretation of their disruptive discovery. A laurel tree rises behind the tomb, and the shepherds wear laurel wreaths. The line is adapted from Virgil’s fifth Eclogue, in which two shepherds lament the death of Daphnis, son of Hermes, lover of Pan, and follower of Artemis, joined in mourning by the whole of the natural world.
Daphnis was a mythical hero of pastoral culture. Raised by Sicilian shepherds on the slopes of Mount Etna, he was said to have sung the first pastoral poems. He lived in prototypical pastoral fashion: tending herds, composing songs, dancing with Pan and Artemis. His death came after he fell in love with Aphrodite herself: he wasted away with the withering vegetation in the heat of summer, entrusting his pipes to Pan as Eros hauled him off to Hades—the general fate of any mortal lover of the goddess.
While Daphnis was Sicilian, the essential concept of the lost pastoral idyll that he represented became associated with the province of Arcadia, in the central Peloponnese of Greece, hence the location of his tomb. Many of the specific features of the Arcadian ideal—rolling hills, misty sun-dappled valleys, scattered groves of oak and poplar trees—derive from actual landscapes. As a landlocked and mountainous interior region unsuitable for large-scale crop agriculture, much of Arcadia was indeed sparsely inhabited by pastoralists, said by classical urbanites to have their own distinctive mythology, and to preserve ways of life long lost in teeming cities like Athens and Alexandria.
Before there was Arcadia, there was Eden, and any number of myths of lost harmony with nature and the origin of alienation. Human cultural traditions seem universally to produce these stories, and our modern civilization has proved no different, from the Industrial-era English preoccupation with preserving the beatific countryside to today’s environmentalist assumptions, both popular and academic. But Arcadia contains cadavers rather than world trees. The image of a pastoral landscape as homeostatic is possible only through a severe lack of temporal perspective. Far from being timeless refuges, pastoral systems are among the first results of the transformation of nature by human beings. Historically, they were its leading edge.
The foundational image of ancient Arcadia was, in Evan Eisenberg’s words, “the hills in which herdsmen summer their flocks—in the classical world, the border between tilled land and wilderness.” These borderlands could be located along the expansion front of an agrarian society, or form pockets “left behind” for reasons of climate or topography. Eisenberg goes on to note that “[this] setting is especially apt in that pastoralism is in the short run the easiest and pleasantest, in the long run the most destructive of ancient ways of life.”
Ancient Arcadia itself was not spared widespread erosion and deforestation driven by relentless goat browsing pressure, which also bears some responsibility for maintaining the parkland-like landscape associated with the concept. The somewhat improved ecological state of the region today is due to ongoing state-imposed reduction of the goat population. This began with the Metaxas regime’s 1937 declaration of “war against the goat,” which featured compulsory slaughter of goats that entered designated forest areas and forced shepherds to move to cities or switch to field agriculture.
Ancient authors were well aware of the damage caused by overgrazing, and of the instability of pastoral systems. Throughout classical discourse, a dialectic developed between the position that the observed deterioration of Mediterranean ecology under human exploitation was an inherent result of the general aging of the earth, posited most notably by Seneca, and the position that it was caused by human mismanagement and excess. Pliny laments that “[we] blame the barrenness of the earth on its age, as if it were growing old and exhausted. But it is our own faults—our excessive greed for pasture, the trampling of flocks, and the constant burden of agriculture—that have drained the earth of its strength” (Natural History, Book 18, Chapter 1).
The corollary that deterioration caused by human mismanagement should be correctable by improved management was generally not drawn at societal scale. Classical agricultural manuals certainly describe practices that would preserve soil structure and fertility, but these are couched in terms of individual ethics, to be followed by the virtuous farmer, and address crop agriculture more than pastoralism. By the late fourth century AD, classical society was suffering from structural problems severe enough to render the question of an organized response moot, and Ambrose rails that “The flocks consume all in their path, leaving the earth as if it were plundered. How will the earth yield her fruits when she is stripped bare by the appetite of beasts and the folly of men?” (Hexaemeron, Book 3, Chapter 16).
Disharmony and destruction were apparent, and ancient Mediterranean society did not generally see itself as capable of competent ecological management, because it could not conceive of itself as a system able to mount a coordinated homeostatic response. There had to be a reason, then, for this obvious relational rupture. Something had to have died. And here we find the body of Daphnis of the forest, the representation of this loss and alienation, who could feed his “fair flocks” in harmony with Pan and Artemis. Daphnis was tricked by Aphrodite into falling for a city-dwelling princess as part of the arc toward his demise. This is no accident.
But, as his inscription implies in its switch to direct address in the penultimate line, perhaps Daphnis is alive, and can be found from the forest to the stars, if only we can see past his tomb. Though our understanding of ecology and biology today is far deeper and more granular than the classical world’s, we are strikingly similar to them in one key respect: we too fail to notice our full ability not just to negatively affect, but also to heal and to steward, the natural world. Since the Industrial Revolution, our release of long-trapped carbon into the atmosphere has made us, whether we like it or not, geoengineers. We should not try to ameliorate this outcome with half-measures, but rise to meet our newfound responsibility with the full breadth of our technological tools and in cooperation with nature’s own gatekeepers of carbon: plants.
The Amazon Was Created by a Lost Civilization of Gardeners
There is no intrinsically correct form for the concepts a society uses to structure the relationship between matters seen as internal to its processes and those seen as forming the external frame of the world within which it is contextualized. Concepts such as Daphnis dead in Arcadia, or the formation and separation of the ideas of the “human” and “natural” worlds, come about to explain observed patterns, events, and causes.
Given that any human society’s processes are, in their entirety, as fully dependent energetically on plant photosynthetic production as all other parts of the heterotrophic biosphere, accounts of separate and uncontrollable natural forces, and of changes in the complexity of a society’s internal processes and productions, are part of the formation of a societal self-concept rather than accurate descriptions of energy and influence. Much can be learned by examining which elements of the world a society decides to define as outside its domain of responsibility, and the ways that this patterns its engagement with reality.
In 1998, Betty Meggers, one of the leading American archaeologists studying Amazonia, wrote that terra preta, the dark, anomalously fertile soil found in patches across the Amazon, “is unlikely to have been cultivated indigenously because habitation sites are also burial grounds.” Meggers wrote within an intellectual tradition that was incapable of understanding the evidence before its eyes of large-scale human engagement with the biosphere outside of the familiar frames of field agriculture, pastoralism, and urbanism.
In Amazonia, humans created and lived within a managed forest across thousands of years and miles, in a manner invisible to Western lenses. Meggers was predominantly responsible for propagating the standard view of Western anthropologists and archaeologists in the mid-twentieth century: that indigenous Amazonians were, at best, agriculturalists of opportunity, scraping a meager harvest from nutrient-depleted tropical soils famous for their lack of fertility. Her 1971 book, Amazonia: Man and Culture in a Counterfeit Paradise, introduced the trope that seemingly paradisiacal Amazonia was a place where nature imposed harsh and intrinsic limits on human habitation.
This view became untenable with the demonstration, in recent decades, that legends of the lost garden cities of Amazonia were in large part historically accurate. The evidence came from many directions, most compellingly in the last few years from the large-scale use of lidar to uncover remnant structures beneath the forest canopy. A 2022 study in southwestern Amazonia found that “large settlement sites are surrounded by ranked concentric polygonal banks and represent central nodes that are connected to lower-ranked sites by straight, raised causeways that stretch over several kilometres. Massive water-management infrastructure, composed of canals and reservoirs, complete the settlement system in an anthropogenically modified landscape.”
Amazonia was the domestication site for many of our currently dominant cultivated plants, including cacao, peppers, pineapple, sweet potato, and tobacco. Out of the roughly 16,000 known Amazonian tree species, only 227, just 1.4% of the total diversity, compose 50% of the total forest cover—yet these species are overwhelmingly those useful to humans, or species that modify their local ecology in a direction favorable to humans. The scale of human management suggests a pre-Columbian population of around 10 million; earlier estimates under the Meggers hypothesis, which ranged as low as 500,000, are now discredited. But the fundamental issue remains that freshly cleared rainforest soil is extremely infertile. In a high-energy, high-diversity system such as a tropical forest, the vast majority of available nutrients spend almost all their time cycling rapidly between the bodies of living organisms, which are specialized to absorb decomposition products quickly, leaving little to accumulate as a soil reservoir. So how were all these people fed?
In much of Amazonia, the lower populations of recent centuries allowed a form of shifting cultivation often termed “slash and burn,” or swidden agriculture. A plot is cleared of forest, and the felled vegetation is burned on site, forming a surface layer of soil enriched with nutrients in which crops can be grown for a few years before fertility declines; humans then move on to a new plot, allowing the first to recover through natural regenerative succession. Assuming no banking of nutrients, this method imposes limits on population density set by the recovery time, and is sustainable only at low density, as the sketch below illustrates.
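The arithmetic of that limit is simple to state. In the toy model below, all figures are invented for illustration rather than drawn from ethnographic data; the point is only that the steady-state fraction of land under crops is bounded by the ratio of cropping years to full cycle length.

```python
# Toy model of the swidden density limit; all numbers are illustrative,
# not ethnographic data.

def max_cultivated_fraction(crop_years: float, fallow_years: float) -> float:
    """Steady-state upper bound on the share of territory under crops."""
    return crop_years / (crop_years + fallow_years)

# Hypothetical cycle: 3 years of cropping, then ~25 years of forest regrowth.
fraction = max_cultivated_fraction(crop_years=3, fallow_years=25)
print(f"{fraction:.1%} of the territory can be cropped at once")  # ~10.7%

# Population density scales with this fraction: shortening the fallow raises
# it, but only until fertility recovery fails and the cycle collapses.
```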
In the Brazilian Amazon, freshly cleared forest soil in an area without a history of human cultivation is known as terra comum, common soil. Repeated cycles of swidden agriculture result in an accumulation of carbon and other combustion products in the soil, somewhat improving its fertility; this soil is known as terra mulata, brown earth. The most fertile soil, terra preta, is enriched in carbon to a greater degree than terra mulata, and also contains a mixture of compost products, animal bones and feces, pottery shards, and other evidence of human modification. This is the soil that Meggers mistook for residue from burial grounds and middens. But these soils cover between three and ten percent of the total forest area, far more than could be accounted for through funerary practices, or as the unintentional remnant of a low-density population. In Bolivia, the majority of the forested area surrounding watercourses in the llanos could well be anthropogenic, as nearly all of it is underlain by highly human-modified terra preta.
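For scale, a back-of-envelope calculation (taking a round 5.5 million square kilometers for the Amazon forest, an assumed figure rather than a measurement) shows why the funerary explanation cannot carry the weight:

```python
# Back-of-envelope areas implied by the terra preta coverage estimates above.
# The ~5.5 million km^2 forest extent is a round assumed figure.
forest_km2 = 5_500_000
for share in (0.03, 0.10):
    print(f"{share:.0%} coverage -> ~{forest_km2 * share:,.0f} km^2")
# 3%  -> ~165,000 km^2, larger than England (~130,000 km^2)
# 10% -> ~550,000 km^2, roughly the area of metropolitan France
```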
Carbon as a soil additive has several unique properties. Most importantly, it “adsorbs,” or adheres, charged particles onto its charged surface, and when the feedstocks producing it are burned in the right way, it is porous, with a vast surface area. These properties underlie the current use of biochar in soil, in addition to its being a fixed and extremely stable form of carbon with negligible return to the atmosphere.
Amazonia, then, is an example of a highly human-modified system, in no sense an untouched wilderness or a counterfeit paradise. This was invisible to Western archaeologists because the boundaries they expected to see were not present, and the cultivated species were intermixed with the wild ones. Eduardo Neves, a Brazilian archaeologist at the forefront of the reevaluation of the history of Amazonia, notes that “[the] Indigenous worldview does not differentiate between the domain of culture and the domain of nature. The diversity of the Amazon, the presence of many large nut trees and fruit-bearing palm trees, is a result of Indigenous practices.”
These indigenous practices also increased the carbon held in the land as charcoal many times over what it would have been in the absence of human management. What appeared as wilderness is in reality a garden forest shaped to human design, and certainly not an Arcadian edge. Deforestation for monocultures of soy or cattle pasture is destroying an interwoven artifact of human action as part of the biosphere—not a pristine system alienated from human influence—and replacing it with a simpler, energetically less favorable one.
Once we are able to see the relational system connecting the structure of the forest to the human activities that modified it, conceiving of Amazonia as in some way separate from its human inhabitants becomes self-evidently erroneous. We have here a cultural blindness and essentialization leading to a failure to recognize the evidence before our eyes of long-term intentional human management. Similarly, our appraisal of phenomena such as anthropogenic climate destabilization is bounded by the extent of our contextual knowledge, since the Earth system as a whole is likewise not energetically separable from its human inhabitants.
Arcadian Thinking Limits Our Responses to Climate Change
Just as the human influence on Amazonia flew under the radar of Western agriculturalists and anthropologists, so the energetic reality of the climate crisis too often escapes adequate understanding when viewed through eyes determined to separate the cultivated from the wild. All of the geoengineering methods with the potential to enable homeostatic regulation of the climate system lie across this boundary, because they all involve targeted interventions to change the parameters of planetary-scale processes. In this way, a small energetic input can have a disproportionately greater effect or, in other words, an intended consequence.
Ethics that place these methods a priori out of bounds will hobble our ability to respond as a species to the problem our own success has presented to us, because intended systemic changes are appraised indistinguishably from highly negative unintended consequences. Often, in a specific instance, the problem space is comparatively limited. For example, Populus tremula x alba 717, or “Deer” for short, is a well-known variety of hybrid aspen that exhibits more rapid carbon assimilation, and therefore a faster growth rate, by a novel and evolutionarily innovative mechanism.
Yes, forests in which this variety was the dominant species of Populus would show a faster overall growth rate, and therefore would not exactly match the properties of a preindustrial forest. But we must be free to question why improving the carbon removal capacity of the terrestrial biosphere should be equated with returning it to whatever form it took at whatever time slice our society derives its Arcadian ideal from, or to some notion of a wild uncontaminated by human influence.
Every open system drawdown approach suffers from its own version of this image problem. In the case of oceanic iron fertilization, its equation with “dumping,” and the assumption that toxic algal blooms and some combination of zero effect and systemic breakdown are inevitable, have limited experimentation for decades. Too often, the methods of carbon removal companies are dictated by the idiosyncratic preferences of carbon markets built to reward comparatively minuscule removals by closed system approaches such as direct air capture. Direct air capture is widely known not to be scalable in a time frame relevant to removing carbon at anywhere near the rate we need, but it has the advantage of producing a product (compressed carbon dioxide gas) usable by the fossil fuel industry, at rates sufficient to offset the emissions of software companies.
Concepts are imported which make sense in a business context but which are entirely inadequate to the task at hand, that is, altering the control parameters of a continuous planetary flux process. A focus on permanence and traceability of specific removals to specific customers only makes sense if equal weight is given to scalability and the percentage of energy input under direct human management. Even some open system approaches have become elaborately baroque in their attempts to satisfy the demands of myopic and largely image-driven carbon credit issuance registries.
Behind all of this lies the simultaneous anxiety and reassurance of the availability of solar radiation management as a comparatively low-cost solution, of a sort, should the situation continue to deteriorate without sufficient near-term carbon removal. But solar radiation management solves the near-term temperature increase problem while making other problems worse. It will not prevent ocean acidification, and it will reduce incident solar radiation in photosynthetic wavelengths, while also removing the temperature-rise control signal for carbon accumulation. At high atmospheric carbon dioxide levels, the therapeutic index of solar radiation management is narrow: the required annual aerosol dose falls between narrow bounds, and the consequence of underdosing in particular in a given year would be wide swings in average global surface temperature, because the underlying climatic forcing is only masked, not removed, by the dimming sulfate aerosols formed from injected sulfur dioxide.
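A minimal energy-balance sketch makes the point concrete. It assumes the standard simplified logarithmic formula for carbon dioxide forcing and an illustrative climate sensitivity parameter; both numbers are assumptions chosen for the example, not predictions.

```python
import math

# Minimal energy-balance sketch of the underdosing problem. Assumes the
# simplified CO2 forcing formula dF = 5.35 * ln(C/C0) W/m^2 (Myhre et al.,
# 1998) and an illustrative sensitivity of ~0.8 K per W/m^2.

C0 = 280.0          # preindustrial CO2 concentration, ppm
C = 560.0           # hypothetical doubled-CO2 atmosphere, ppm
sensitivity = 0.8   # K per W/m^2 of sustained forcing (illustrative)

greenhouse_forcing = 5.35 * math.log(C / C0)  # ~3.7 W/m^2 to be masked

# Aerosols must cancel this forcing every year. A 20% dosing shortfall
# leaves a fifth of it suddenly unmasked:
shortfall = 0.20
unmasked = shortfall * greenhouse_forcing
print(f"unmasked forcing: {unmasked:.2f} W/m^2")                # ~0.74 W/m^2
print(f"warming if sustained: {sensitivity * unmasked:.2f} K")  # ~0.6 K

# The full greenhouse forcing is never removed, only hidden, so the penalty
# for any dosing error grows with total accumulated CO2.
```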
Humans have the potential to be the first species in the history of the biosphere to enable homeostatic regulation of the carbon dioxide concentration of the atmosphere. To do so, we must abandon our desire to alienate nature from human influence, recognizing that this desire is a consequence of its countervailing twin: the desire to act without consideration of the presence of living ecological systems throughout even the most human-dominated environments, and to ignore our dominant influence on planetary energy flux.
The only long-term solution to our spiraling negative impacts on the climate system and the functional connectivity of the global ecosystem is to work with the rest of the biosphere to create ecologies which assume the presence of human influence as a given factor. This is how the only species yet produced by the Earth system to understand our own origins can open our eyes to the threads connecting us to the whole, and in doing so awaken from our Arcadian dreams to create new forests and regenerate waterways in partnership with the species surrounding us.
The only ethic that should apply here is that of systemic functional improvement in retention of energy and communication between living systems. This requires that we work with boldness in cooperation and restraint from alienation and a priori categorization. These attitudes have been in eclipse recently in many different contexts, but the rewards are great, no less than bringing Gaia to life through discovering and applying control inputs to planetary carbon metabolism. We would be functioning as the eyes of the biosphere seeing itself for the first time.
Yes, we must decarbonize our energy production, but this will not serve to remove what has accumulated over the last several centuries, nor to reduce our global ecological dominance as energy-intensive heterotrophs. We need to go talk with the autotrophs, the plants, and empower them to remove the carbon we have emitted, for both their sake and ours, but especially for our own. We must ask them to increase their rate of drawdown, and to reduce the rate of return of the carbon they fix.
They will not do this without being asked. Increased atmospheric carbon dioxide deselects plants for efficiency of carbon removal: while it stimulates growth rate, it makes metabolically expensive carbon-capture optimizations less likely to evolve and spread. Increased temperature accelerates respiration-mediated decay of biomass and the liberation of fixed carbon into the atmosphere, so, likewise, the autotrophs will not pull this lever for us. Thankfully, most of the words are already written, and we can cut and paste our messages together. The human discipline devoted to facilitating genetic communication between organisms is called biotechnology, and if harmony is found in communication, it is here that we will discover Daphnis alive.
Conceptually Confused Regulation Is Blocking Biotechnology Progress
The African iroko tree is oxalogenic: it secretes oxalic acid from its roots. Over time, this oxalate exudate accumulates and decomposes to calcium carbonate, or limestone, liberating half of its carbon as carbon dioxide; the soils surrounding the roots of an oxalogenic tree therefore become enriched in fixed carbon at a ratio of half the carbon in the secreted oxalate. Oxalogenic trees are rare. Iroko is one of the only well-known examples, and it has a restricted distribution and a slow growth rate; it takes over a century of growth for the stem of an iroko tree to approach a meter in diameter.
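The carbon arithmetic can be made explicit. A simplified net equation for the oxalate-carbonate pathway (a standard summary of the process, not a claim about the specific microbial consortium around iroko roots) is:

\[
\mathrm{CaC_2O_4} + \tfrac{1}{2}\,\mathrm{O_2} \longrightarrow \mathrm{CaCO_3} + \mathrm{CO_2}
\]

Of the two carbon atoms fixed into each oxalate ion, one is locked into carbonate mineral and one returns to the atmosphere, giving the fifty percent retention ratio described above.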
Genetic biotechnology could enable the elucidation of the oxalate secretion pathway in iroko and its introduction into faster-growing tree species with broader climatic tolerance bands. Introducing these trees into areas where they would integrate effectively into local ecologies—likely areas matching the ancestral range of the recipient species—would increase the rate of carbon removal in the forests of the regions in which they were planted. Because these trees would still perform their original role, including reproducing on their own, while removing more carbon from the atmosphere, this would enable a significant shift in global carbon flux toward drawdown. No method like this will be sufficient alone, but integrating genetic biotechnology with open system drawdown holds significant potential to meaningfully contribute to carbon removal.
But unfortunately, under our current regulatory framework, such methods are almost impossible to test, even at small scale. Creation of a fast-growing, oxalogenic tree could proceed within the walled garden of academia, but widespread deployment, even of a method with demonstrable promise, is effectively blocked. The Salk Institute, an organization named after the creator of the first polio vaccine, and so—one would think—committed to significantly impacting global human problems, has spent close to two decades researching a genetic biotechnology innovation that increases soil carbon storage by upregulating suberin (cork) production in the roots of grasses.
Field trial after field trial has been run, and success has been demonstrated, but no deployment has occurred. The institute has a division dedicated to this work, the Harnessing Plants Initiative, which reports that it has “made significant progress in the lab,” and has registered the trademark Ideal Plants. All this seeming progress obscures the fact that there is almost no viable regulatory route to widespread adoption. The Salk Ideal Plants differ from other varieties only in specific respects: the root system is larger, and the cork wall of each root is thicker. These differences, while specific to unusual anatomical regions, are no larger in scale than those between many horticultural varieties. They are certainly less than those between, say, domesticated maize and its wild progenitor teosinte. However, the state of biotechnology regulation is such that how a thing is done carries far more weight than what it is.
It is no accident that most genetically modified organisms with legal approval for unrestricted use are bulk foods. Only agricultural-chemical conglomerates have historically had the financial and legal resources to shepherd an innovation through the arduous approval process, which can easily take over a decade, while organizations such as the Salk Institute provide demonstrations but do not assume responsibility for deployment. Meanwhile, genetic modification of bacteria and fungi to produce drugs and foods proceeds apace and arouses little protest, and the idea that leakage of these organisms into the environment poses an existential risk hardly exists.
Beyond the specific regulatory issue lies the question of market-driven adoption. Concern for mitigating anthropogenic carbon emissions has resulted in the issuance of carbon credits and the development of systems of carbon accounting, mostly on a voluntary basis but with increasing governmental support globally. Because participation in carbon markets and the purchase of carbon credits are generally voluntary acts associated with the environmental movement, the stipulated determinants of quality for a given carbon removal tonnage include many conditions beyond verification of the removal itself. The two dominant carbon credit registries in the United States both prohibit the use of genetically modified organisms (GMOs) in removals for which credits are issued.
In the baseline public consciousness, despite a little recent progress, “GMOs” are still often considered a simple bad, and their avoidance an obvious necessity for the ecologically literate, especially GMOs placed in natural environments such as forests. So biotechnological methods like the Salk Institute’s, which have shown efficacy but would only ever work if their plants fixed a significant portion of photosynthetic carbon in agricultural or wildland systems, are unlikely to be viable without changes in public understanding. The idea of GMOs as a special danger, and the intrinsic stigma it generates, compounds the general difficulty open system approaches face in gaining acceptance.
The idea that organisms produced by genetic biotechnology represent the possibility of catastrophic contamination of a previously pure biosphere has caused a strange Arcadian fixation, seen both in perception and regulation, on ranking the techniques used in their production on a scale of how much impurity they introduce. The rest of the biosphere, however, continues to communicate for its own purposes, neglecting to respect any such rules.
There are currently three dominant methods of introducing novel genetic script into a eukaryotic organism’s genome: particle bombardment (adhering DNA sequences to physical carriers such as gold nanoparticles, providing the mass to shoot them directly into a cell), transformation using a living mediator (using the pre-existing ability of organisms to integrate sequences into a host genome), and a class of enzyme-mediated methods, such as CRISPR, often collectively termed “gene editing.” All three have been developed since the 1980s.
The division of the U.S. Department of Agriculture (USDA) responsible for biotechnology regulation is known as the Animal and Plant Health Inspection Service (APHIS). In 1987, APHIS established regulations for “the safe movement and importation of genetically engineered organisms,” simultaneously with its publication of regulations intended to control species that spread disease. These regulations designated Agrobacterium, the major organism used in plant genetic transformation, as a plant pest, and classified any genetic sequence identifiable as deriving from Agrobacterium as a “plant pest sequence.” APHIS went on to establish severe restrictions on the import, export, trade, and movement of all organisms containing “plant pest sequences.” Because genetically transformed organisms can be identified by the presence of flanking sequences derived from the Agrobacterium vector used in their transformation, any product of Agrobacterium transformation was regulated in this way.
This introduced a fixation on “genetic purity” into the legal structure governing the entire nascent field of plant biotechnology. Other methods of transformation, such as particle bombardment, were not regulated by the USDA at all, so long as they too did not introduce “plant pest sequences.” Although the intent was to regulate the products of biotechnology in themselves, the letter of the law treated the minute genetic sequences from disease-causing organisms used in transformation as somehow capable of introducing the contaminating essence of the pest into the host organism. The regulatory superstructure that developed could therefore be imported wholesale from existing rule systems built to reduce the risk of spread of infectious diseases, with the whole class of organisms transformed via non-vector-mediated methods excluded.
There are very few examples of ecological problems caused by genetically modified plants, and those that have occurred underline the importance of separating our evaluations of technique and application. To much of society, genetically modified plants as a class are synonymous with the specific example of glyphosate-resistant crops, and the actual use—enabling glyphosate to be sprayed to kill weeds without harming the crop—is conceptually confused with nebulous ideas, including the production of glyphosate by the plants themselves, or simply an unknown and therefore uncanny enhancement that could escape and dominate.
The problems caused by these plants are part of the general problem inherent in managing large areas of land in a manner that progressively depletes the soil biome, reduces mineral nutrients, and prevents carbon sequestration. That is, they are the problems of industrial agriculture itself. Nonetheless, the historical domination of the regulatory system by the concerns of the purveyors of these plants has led to patterns of decisions by the USDA illustrating the incoherence of its regulatory approach.
In 2003, a creeping bentgrass variety (Agrostis stolonifera), transformed by Monsanto to be resistant to glyphosate (brand name Roundup) for intended use on golf courses, escaped from a field trial in Oregon. The grass has since spread widely through multiple habitats across the state, and will likely continue to expand. In 2016, long after it had escaped from cultivation, APHIS approved the glyphosate-resistant creeping bentgrass, claiming that it did not have enhanced weediness potential. Creeping bentgrass is a Eurasian species brought to North America during European colonization; it had already shown itself capable of establishing in place of native grasses in disturbed areas, with a history of “noxious weed” classification.
The intended use case for the grass was to improve the ease of weed control with Roundup on golf courses. APHIS noted that “[a] primary herbicide used to eliminate [creeping bentgrass] where it is not desired is glyphosate,” but that alternate herbicides were available. Glyphosate is indeed the dominant herbicide used by ecological restorationists seeking to remove invasive plants such as bentgrass, so it seems bizarre that the USDA could claim that a bentgrass variety specifically engineered to be resistant to this herbicide did not have increased weediness potential.
The incoherence noted, the fact remains that Roundup-resistant Agrostis is identical to conventional Agrostis in all but one troublesome respect, and a minor threat at best. The voluminous field trial data and long timeline required for approval stem from the intensive investigation of the plant’s properties, triggered by the presence of “plant pest sequences” in the construct, even though the grass was transformed via biolistics. These investigations included “agronomic, compositional, disease and insect evaluations,” across “65 field releases in 20 states and 40 counties performed between 1999 and 2002.” Unsurprisingly, none of these properties differed from conventional bentgrass. Glyphosate-resistant creeping bentgrass spreading from highly managed and intensively irrigated golf courses is hardly a poster child for integrative biotechnology, but it is more annoyance than danger.
The vast expense poured into gathering evidence to support the approval of such a marginal candidate, which was already causing problems in the wild when the decision was made, makes clear some of the processes retarding the balanced development of plant biotechnology; meanwhile, organizations like the Salk Institute write letters to Congress petitioning for funding to run the same exhaustive field trials required for approval of their plants. The Scotts Miracle-Gro Company had already abandoned commercialization of the grass by the time approval was granted, and the only mention on its website now is a brief historical account and a phone number to call to report the grass for eradication.
After over thirty years, APHIS updated its regulations in 2020 with a rule termed SECURE (Sustainable, Ecological, Consistent, Uniform, Responsible, Efficient), changing the determinant of regulability from the presence of “plant pest sequences” to whether a given transformation “could have happened in nature.” The implication now was that “unnatural” transformations were more dangerous, and thus more regulatable, than those that could have occurred without human intervention.
Because living organisms exist in a context where high-energy particles occasionally collide with the nucleotides in their genomes, and because the replication of these chains of nucleotides is not absolutely perfect every time, changes in individual nucleotides, including deletions, additions, and the replacement of one of the four by another, are common events. These are known as point mutations. Also common are transpositions of sections of the chains, changing the order of the sequence within an organism.
The framers of the SECURE rule classified these as natural transformations, and applied minimal regulation to the human methods that mimic them, known collectively as gene editing. Regulation increases as more nucleotides are changed, the reasoning being that every additional nucleotide transformed takes us further from the state of nature. It is most stringent where sequences from one species are introduced into another, since this is seen as the scenario least likely to occur in nature. The organisms created by such methods are termed transgenic, as opposed to the cisgenic products of gene editing.
The recent advent of CRISPR and other methods allowing targeted single-nucleotide changes at specific points in the genome has made anthropogenic point mutation relatively simple: a gene can often be rendered inactive or more active, or have the properties of its protein product modified, by one or only a few nucleotide changes. The text of the regulation allows the stacking of repeated single-nucleotide changes, so long as each stage of the project is given individual approval. But gene editing is not capable of introducing entire new metabolic pathways into an organism. These require genetic constructs enabling the production of novel proteins, either written by humans or taken from a donor organism, creating a transgenic synthesis.
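The difference in scale is easy to see in miniature. Below is a toy sketch, with all sequences and lengths invented purely for illustration, contrasting a single-nucleotide edit with the insertion of a transgenic cassette:

```python
# Toy illustration of edit scale; all sequences are invented.

gene = "ATGGCTTACGGTCCA"  # pretend 15-bp fragment of a host gene

def point_edit(seq: str, pos: int, base: str) -> str:
    """Change one nucleotide in place, as CRISPR-style editing might."""
    return seq[:pos] + base + seq[pos + 1:]

def insert_construct(genome: str, site: int, cassette: str) -> str:
    """Splice an entire foreign coding sequence into the genome."""
    return genome[:site] + cassette + genome[site:]

edited = point_edit(gene, pos=4, base="A")
cassette = "ATG" + "GCA" * 400 + "TAA"  # stand-in ~1.2 kb transgene
transgenic = insert_construct(gene, site=6, cassette=cassette)

print(sum(a != b for a, b in zip(gene, edited)))  # 1 nucleotide changed
print(len(transgenic) - len(gene))                # 1,206 nucleotides added
```

Stacked point edits can retune what a genome already says; introducing a new metabolic pathway means adding text the genome has never contained.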
Gene editing is not, as usually portrayed, simply a technological advance in bioengineering that facilitates easier and safer application; it is rather a specific method that allows targeted genetic changes to small numbers of nucleotides. CRISPR will therefore never replace transgenic transformation, but rather is a complementary method. Even in instances where metabolic transformations could be achieved by specific patterns of gene editing, this would be slower and clumsier than simply introducing a construct.
This is not to imply that the results of gene editing are generally less major in their effects on an organism, only that it is incomplete as a transformation toolkit. Taking a step back, we can see that what APHIS did with the SECURE rule was to replace one criterion of purity with another. The plant biotechnology industry reoriented itself around the new measure, and immediately began to produce literature emphasizing the safety and technological advancement of CRISPR, positioned as a replacement for the supposedly clumsier and more dangerous older transgenic methods. Unfortunately for the framers, there is very little that does not happen in nature, and naturally transgenic organisms are relatively common. Like other methods before it, gene editing is now often used for specifically regulatory reasons: not because it is better and safer, but because it does not produce a GMO as defined by APHIS.
In December 2024, a Northern California district court vacated the SECURE rule in its entirety, holding that both the gene editing exemption and the failure to incorporate noxious weed authority were “arbitrary and capricious” decisions by APHIS. The judge argued that there was no basis to assume all conventionally bred plants are risk-free, and therefore that it made no sense to use them as a baseline for determining the scope of regulatory oversight. With this decision, APHIS has for now reverted to the pre-SECURE regulatory structure, though this situation is widely considered unsatisfactory and further revision is expected. Given how long the original regulation took to update, significant delay is likely.
The products of plant genetic engineering do not present a unique risk. Instances of them becoming noxious weeds are possible but rare, and if this happens, these weeds can and should be managed. But a regulatory mindset deriving from ideals of purity and contamination has greatly inhibited the development of a field of knowledge and practice that represents unprecedented biospheric integration and is one of our best hopes for near-term carbon drawdown. Many find this a scary proposal; but it already happens in nature all the time, without our knowledge or involvement.
Nature Recognizes No Difference Between Its and Our Interventions
The black cottonwood (Populus trichocarpa) is a deciduous poplar tree common in the Pacific Northwest. A variant of it contains the composite gene BSTR, composed of three sections pasted together. The first codes for a portion of the enzyme glycosyltransferase from the bacterium Streptomyces, which lives symbiotically within poplar trees as an endophyte; the second, for part of a DNA-binding protein from an ant, Trachymyrmex septentrionalis. This ant is a farmer, cultivating a fungus in subterranean galleries, which it feeds with poplar wood and leaf litter.
The third portion of BSTR is a significant chunk of the coding sequence for RuBisCo, the carbon-fixing protein in photosynthesis and the most abundant enzyme on Earth, deriving from the cottonwood’s own chloroplast. These three sections, when joined together in this specific pattern, form a gene whose protein product enables the photosynthetic apparatus of the tree to adapt more rapidly to changes in light intensity, leading to significant gains in its carbon fixation rate.
BSTR black cottonwood is a compelling example of the deep contingency and synthetic power of nature. That two enzyme fragments, from a prokaryote and an animal, could combine with a portion of chloroplastic RuBisCo to form a composite gene integrated into the plant’s nuclear genome is already bizarre. That such a gene could be transcribed is improbable. That its transcript could be translated into a functional protein stretches credulity. That the function of this protein should be to improve the efficiency of photosynthesis itself lies outside any interpretation in probabilistic terms. That said, the organism is also obviously highly transgenic, and born full-leafed from nature’s face. What can be interpreted probabilistically is the time frame in which the event occurred.
The first wild tree discovered is almost certainly not the mother of the whole line. Black cottonwood, like most poplars, is proficient at asexual reproduction: adventitious roots sprout easily from stem cuttings, the root system is laden with dormant buds, and regrowth from stumps and branches is rapid. Given this ability, the frequency of the tree in Pacific Northwest forests, and the comparatively recent development of human survey and sequencing techniques, the transformation event is likely to have occurred several millennia ago, assuming that the enhanced growth rate increases the likelihood of encountering a BSTR cottonwood over time. We also do not know whether the tree represents an ongoing replacement of unboosted cottonwoods, or whether there are physiological costs we are not yet aware of that select against BSTR in some environments, limiting its spread.
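As a toy version of this probabilistic reasoning (every parameter below is invented; only the logarithmic shape of the result matters), consider how long a modestly advantaged founder clone would need to reach a frequency a survey could detect:

```python
import math

# Toy detectability model for a single founder clone; all parameters are
# invented for illustration, not estimates about actual cottonwood stands.

advantage = 0.05            # assumed net growth advantage per decade
population = 5_000_000_000  # notional standing cottonwood count
sampled = 1_000             # trees genotyped in a hypothetical survey

# Detection becomes likely once clone frequency ~ 1/sampled, i.e. once the
# clone numbers about population/sampled individuals.
target_size = population / sampled
decades = math.log(target_size) / advantage
print(f"~{decades * 10:,.0f} years to reach detectability")  # roughly three millennia
```

Under such assumptions, even a genuinely advantaged clone would take millennia to surface in any sampling effort, consistent with an ancient origin.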
The question of how the event occurred can at this point be answered only speculatively, but there are clues. A scenario comes to mind involving cells shed from the salivary glands of ants harvesting living poplar stems or roots hosting Streptomyces; the release of fragmented sequences during concurrent lysis of bacterial and ant cells, mixed with damaged chloroplasts; and all of this happening while the genome of the tree was being actively read, perhaps during an immune response to these insect predators. Most improbable, but at least possible if repeated billions of times over millions of years, which the ongoing use of cottonwood as an ant agricultural feedstock certainly provides.
Human transformation techniques such as particle bombardment, with no method to ensure genomic integration other than physically shooting the construct into the nucleus, similarly rely on this “law of large numbers,” but even biolistics is likely far more efficient than whatever happened several thousand years ago somewhere near what is now Seattle.
There is no essential difference between the creation of a genetically engineered plant and a naturally occurring event such as BSTR cottonwood. Acting as agents of change should rightly give us a sense of responsibility for the consequences of such change, but it should not lead to wholesale abandonment of what is, from my perspective, our duty as the bearers of intentionality, and of the concentration of energy flux in service of that intentionality: to integrate living systems in directions that move the system as a whole toward homeostasis.
While wholesale abandonment may seem a strong term, it accurately names the central human tendency to avoid direct perception of sources of shame. If we consider shame to be an essentializing abandonment of our self-perception, avoiding perceptions that cause this becomes a survival strategy. Thinking about a constructed human world abutting a given natural world, without considering their inevitable intermingling, allows us to cognitively and emotionally withdraw from our contextual reality. This framing also makes it easier to externalize costs, and neglect to register energetic dependencies, while causing action paralysis even in cases of clear necessity.
This shame also bears significant responsibility for a lacuna in ecological science regarding the study of directed human improvement of ecosystemic function. Practices such as permaculture and agroforestry give hints of such a science of applied ecology, but most ecological theory still categorizes human activity as a disturbance outside the system, rather than as an inevitable component of any modern ecology that could be actively beneficial to both us and the system as a whole when managed correctly.
While properties such as biodiversity are correctly seen as important to protect and restore, there is a surprising lack of theoretical undergirding that would give quantitative descriptions of their benefits as general properties, or predictive power regarding the effects of changes. We need general measures of ecological functionality, and responses to degradation that restore functionality; but in order to derive these, we must release the common assumption that restoration of function is synonymous with restoration of some prior Arcadian state of affairs.
In the world as we find it, the boundaries of entire climatic zones are shifting, and the effects of humans are ubiquitous. Human assistance could enable many organisms to persist through these centuries, leveraging our intentionality in service of accelerating the evolutionary process. For example, increasing the thickness of bark of forest trees would increase carbon storage while reducing mortality from wildfires. Enhancing the downy layer on the underside of many leaves would reduce transpiration rate and improve drought tolerance.
As a sketch of what such a theoretical and regulatory environment might look like for synthetic biology, imagine a world in which we had derived stable metrics of ecological functionality, using standardized data collection and measures of energetic flux. In this world, genetically modified plants would be regulated at the level of practices rather than around speculative risks of catastrophic contamination. Regulatory approval could be required for increments of area planted, or for commercial use or use above certain production thresholds—use that would have an energetic impact. Noxious weed regulation could remain intact, applying at the level of the species worked with, regardless of the construct inserted.
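To make the sketch concrete, here is a minimal, hypothetical encoding of such tiered oversight (every name, tier, and threshold below is invented), keyed to deployment scale and species-level weed status rather than to the technique used to produce the plant:

```python
from dataclasses import dataclass

# Hypothetical tiered-oversight sketch; every name and threshold is invented.

@dataclass
class Deployment:
    species: str
    area_hectares: float
    noxious_weed_listed: bool  # species-level status, construct-independent

def required_oversight(d: Deployment) -> str:
    """Scale oversight with energetic footprint, not construct origin."""
    if d.noxious_weed_listed:
        return "denied: noxious weed rules apply to the species itself"
    if d.area_hectares < 10:
        return "notification only: small field trial"
    if d.area_hectares < 10_000:
        return "standard review: ecological-function monitoring plan"
    return "full review: landscape-scale energetic impact assessment"

print(required_oversight(Deployment("hybrid poplar", 250.0, False)))
# -> standard review: ecological-function monitoring plan
```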
Unlike many pressing issues of our time, this is one where the financial and regulatory obstacles are identifiable and clear, and scientific experimentation and iteration can be conducted immediately and rapidly. In fact, I have been doing so for the last five years: my public benefit corporation successfully bred and planted hybrid poplar trees that assimilated carbon from the atmosphere thirty to fifty percent faster than normal, work for which we were profiled in The New York Times. I am continuing to work with these trees, answering basic research questions about their behavior and carbon removal rate in various contexts, as one branch of the nonprofit organization Carbocene Industries. For those who wish to help rebalance planetary carbon flux and become technological gardeners of our Earth, I invite you to join me. The whole Earth is already our garden; it is high time we recognize this and treat it so.