In an era defined by rapid urbanisation and mounting concerns over food security, cities around the world are undergoing a quiet but profound transformation. Rooftops, abandoned warehouses, underground tunnels and glass-clad high-rises are being repurposed as sites of food production, giving rise to what commentators have termed the "urban farming revolution." Far from being a peripheral trend, urban agriculture now represents a significant and growing sector of the global food system, attracting investment from technology firms, government agencies and environmental organisations alike.
The most visible manifestation of this shift is the proliferation of vertical farms — multi-storey facilities in which crops are cultivated in stacked layers under precisely controlled conditions. Unlike conventional agriculture, vertical farms rely on LED lighting tuned to specific wavelengths, hydroponic or aeroponic irrigation systems, and climate-control technology to optimise growth cycles year-round. Proponents argue that these facilities use up to 95 per cent less water than field agriculture, eliminate the need for pesticides, and can produce harvests in a fraction of the time required by outdoor cultivation. Companies such as AeroFarms in Newark, New Jersey, and Bowery Farming in New York have demonstrated that leafy greens, herbs and certain fruiting vegetables can be grown at commercial scale in the heart of dense metropolitan areas.
Rooftop gardens occupy a different but complementary niche within the urban food landscape. From community allotments in London to commercial greenhouse installations atop Parisian supermarkets, these spaces convert otherwise underutilised building surfaces into productive agricultural land. Beyond food production, rooftop gardens provide measurable environmental benefits: they reduce the urban heat island effect, improve air quality by absorbing particulate matter, and attenuate stormwater runoff, thereby reducing pressure on city drainage systems. In Montreal, a 2021 municipal survey found that buildings hosting rooftop gardens recorded interior temperatures up to 3°C lower during summer peak periods, generating appreciable energy savings.
Community plots and urban allotments serve yet another function, one that is as much social as economic. Distributed across city parks, housing estates and vacant lots, these shared growing spaces give residents direct access to fresh produce while simultaneously fostering community cohesion. Research conducted by the University of Exeter found that participants in community gardening programmes reported significantly lower levels of anxiety and depression compared with control groups, suggesting that urban farming carries tangible mental health benefits. Furthermore, in low-income neighbourhoods where fresh vegetables are scarce and expensive — so-called "food deserts" — community plots can meaningfully supplement household nutrition at minimal cost.
The economic dimensions of urban agriculture are equally compelling. A comprehensive analysis published by the Food and Agriculture Organization in 2022 estimated that the global urban farming market was worth in excess of $150 billion and was projected to expand at a compound annual growth rate of roughly 14 per cent through 2030. Job creation represents one important economic benefit: a single mid-sized vertical farm typically employs between 40 and 80 full-time staff, and the broader ecosystem of equipment manufacturers, software developers, logistics operators and distributors amplifies this employment effect. Several cities, including Singapore, Amsterdam and Chicago, have incorporated urban farming into their official economic development strategies, offering tax incentives, low-interest loans and streamlined planning permissions to attract investment.
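As an illustrative check on these figures (treating the roughly 14 per cent rate as exact and compounding from 2022), the projection implies:

```latex
\$150\ \text{billion} \times (1.14)^{2030-2022} = \$150\ \text{billion} \times (1.14)^{8} \approx \$430\ \text{billion}
```

that is, a market nearly tripling in size by 2030. The figure is indicative only, since the passage gives the growth rate as approximate.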
Critics, however, urge caution. High capital and operational costs remain formidable barriers, particularly for vertical farms that consume substantial quantities of electricity. Unless that electricity derives from renewable sources, the carbon footprint of indoor-grown produce can rival or exceed that of conventionally farmed alternatives transported over long distances. There are also concerns about the displacement of traditional farmers and the corporatisation of food production, as venture-backed firms with deep pockets outcompete smaller, community-based initiatives. Regulatory frameworks in many jurisdictions have yet to catch up with the pace of innovation, creating uncertainty for investors and operators alike.
Despite these challenges, the trajectory of urban farming appears firmly upward. Falling costs of photovoltaic electricity, together with continuing gains in LED efficiency, are progressively reducing the energy cost of artificial lighting. Sensor networks and artificial intelligence are enabling ever-finer control of growing conditions, boosting yields and reducing waste. Meanwhile, growing consumer demand for locally sourced, pesticide-free produce is creating robust market conditions for urban growers. Taken together, these developments suggest that the integration of food production into the urban fabric, once the preserve of utopian planners, is rapidly becoming an economic and environmental necessity.
A  Sleep is among the most fundamental of all biological processes, yet it remains one of the least understood. For much of human history, sleep was regarded as a passive state — a mere cessation of waking activity. That view was overturned definitively in the twentieth century when researchers equipped with electroencephalographs discovered that the sleeping brain is far from dormant. Rather, it cycles through a series of distinct stages, each characterised by unique patterns of neural activity, and each serving specific physiological and cognitive functions.
B  Central to the regulation of sleep is the circadian rhythm — the internal biological clock that governs the timing of sleep and wakefulness over a roughly 24-hour cycle. Located in the suprachiasmatic nucleus of the hypothalamus, this master clock is entrained primarily by light. When photoreceptors in the retina detect the blue wavelengths characteristic of daylight, they transmit signals that suppress the secretion of melatonin, a hormone produced by the pineal gland that promotes sleepiness. As ambient light dims toward evening, melatonin levels rise, triggering the cascade of physiological changes — a drop in core body temperature, reduced heart rate, decreased alertness — that prepare the body for sleep. Disruption of this rhythm, whether through shift work, transmeridian travel or the pervasive blue-light emission of digital screens, has been linked to a wide spectrum of adverse health outcomes.
C  Sleep itself is not uniform. A normal night of sleep comprises four to six cycles, each lasting approximately 90 minutes, and each moving through four distinct stages. The first three stages constitute non-rapid eye movement (NREM) sleep, progressing from light sleep (stages 1 and 2) to slow-wave or deep sleep (stage 3), characterised by the synchronised, high-amplitude delta waves visible on an EEG. It is during slow-wave sleep that the body undertakes its most intensive physical repair: growth hormone is secreted, immune function is consolidated, and cellular damage is addressed. Slow-wave sleep is also the stage most closely linked to the consolidation of declarative memory, the type of memory concerned with facts and events. The fourth stage, rapid eye movement (REM) sleep, is associated with vivid dreaming, heightened limbic activity, and a paradoxical inhibition of skeletal muscle movement; it appears critical for emotional regulation and the consolidation of procedural memory.
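A quick arithmetic aside makes the cycle structure concrete: four to six cycles of roughly 90 minutes each spans

```latex
4 \times 90\ \text{min} = 6\ \text{h} \qquad\text{to}\qquad 6 \times 90\ \text{min} = 9\ \text{h},
```

which sits close to the seven-to-nine-hour adult recommendation cited later in the passage.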
D  The consequences of inadequate sleep are both extensive and well-documented. Cognitively, even a single night of shortened sleep measurably impairs sustained attention, working memory and executive function. A landmark study at the University of Pennsylvania demonstrated that subjects restricted to six hours of sleep per night for two weeks showed performance deficits on psychomotor vigilance tasks equivalent to those observed after 24 hours of total sleep deprivation — yet the subjects themselves reported feeling only mildly impaired, suggesting a troubling disconnection between subjective perception and objective impairment. Physiologically, chronic sleep restriction is associated with dysregulation of the hormones leptin and ghrelin, which control appetite and satiety, contributing to increased caloric intake and a heightened risk of obesity. Elevated inflammatory markers, impaired glucose metabolism and increased cortisol secretion further implicate insufficient sleep in the aetiology of cardiovascular disease and type 2 diabetes.
E  Modern societies face what sleep scientists have described as an epidemic of sleep deprivation. Surveys conducted across industrialised nations consistently reveal that between 30 and 40 per cent of adults sleep fewer than the seven to nine hours recommended by the American Academy of Sleep Medicine. The causes are manifold: demanding work schedules that erode sleep time at both ends of the night, the always-on connectivity of smartphones and social media platforms, artificial lighting that delays the nocturnal rise of melatonin, and a cultural ethos in many professional environments that tacitly valorises long working hours at the expense of rest. The economic consequences of this deficit are substantial. A report by the RAND Corporation estimated that the United States loses approximately 411 billion dollars annually in productivity as a result of sleep deprivation among its workforce.
F  Addressing the sleep crisis will require interventions at multiple levels. At the individual level, sleep hygiene practices — consistent sleep and wake times, a cool and dark sleeping environment, restriction of caffeine after midday, and limitation of screen exposure in the hour before bed — have demonstrated efficacy in improving sleep quality. At the organisational level, there is growing evidence that later school start times meaningfully improve adolescent sleep duration and academic performance; several school districts in the United States and United Kingdom have implemented such changes with encouraging results. At the societal level, researchers argue for a fundamental reappraisal of the cultural attitudes that treat sleeplessness as a virtue, and for regulatory measures — such as limits on mandatory overtime — that create structural conditions conducive to adequate rest.
The development of quantum computing represents perhaps the most ambitious engineering undertaking of the early twenty-first century. While classical computers process information as binary digits — bits that exist in one of two states, zero or one — quantum computers exploit the counterintuitive principles of quantum mechanics to process information in ways that are, in certain respects, fundamentally different. The theoretical foundations laid by physicists over the preceding century are now being translated, with extraordinary difficulty, into machines that may ultimately transform fields from cryptography to drug discovery.
The basic unit of quantum information is the qubit. Unlike a classical bit, a qubit can exist in a superposition of both zero and one simultaneously, meaning that a system of n qubits can, in principle, represent 2ⁿ states at once. This property confers upon quantum computers an exponential advantage for specific categories of problem. A second key principle is entanglement: when two qubits become entangled, their measurement outcomes remain correlated regardless of the physical distance separating them, and neither qubit can be described independently of the other. Einstein famously dismissed this phenomenon as "spooky action at a distance," yet it is precisely this non-local correlation that quantum algorithms exploit to process information with a parallelism impossible in classical systems.
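In the standard notation (not used in the passage itself), the claim about superposition reads as follows: an n-qubit register occupies a state

```latex
|\psi\rangle \;=\; \sum_{x=0}^{2^{n}-1} c_x\,|x\rangle, \qquad \sum_{x} |c_x|^{2} = 1,
```

so a full description requires up to 2ⁿ complex amplitudes c_x. This is the precise sense in which n qubits "represent 2ⁿ states at once", although any measurement yields only one of those basis states.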
The theoretical case for quantum computing was substantially advanced in the 1990s. In 1994, mathematician Peter Shor published an algorithm demonstrating that a sufficiently powerful quantum computer could factorise large integers in polynomial time — a task believed to be computationally intractable for classical machines. Since the security of widely used public-key cryptographic protocols, including RSA, rests on the presumed difficulty of this factorisation problem, Shor's algorithm sent shockwaves through the cryptographic community and catalysed intense government interest in the technology. In 1996, Lov Grover at Bell Laboratories demonstrated a quantum algorithm for searching unsorted databases that achieves a quadratic speedup over the best possible classical algorithm, suggesting broad applicability beyond cryptography.
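To make the quadratic speedup concrete: an unstructured search over N items costs on the order of N queries classically, whereas Grover's algorithm finds a marked item with roughly

```latex
k \;\approx\; \frac{\pi}{4}\sqrt{N} \quad\text{queries;}\qquad N = 10^{6} \;\Rightarrow\; k \approx 785,
```

compared with an expected 500,000 queries for a classical linear scan of a million items.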
Despite the theoretical promise, realising quantum computing in hardware has proved extraordinarily challenging. Qubits are inherently fragile: any interaction with the surrounding environment — vibrations, electromagnetic interference, even stray photons — can disrupt the delicate quantum states through a process known as decoherence. Maintaining qubits in a coherent state long enough to perform useful computations requires isolation conditions of extreme stringency, typically including temperatures within a few millikelvin of absolute zero, lower than those found anywhere in the known universe. Contemporary quantum processors, such as those developed by Google, IBM and a growing cohort of specialist firms, typically operate at around 15 millikelvin, achieved through dilution refrigerators of considerable engineering sophistication.
Error correction presents a further formidable obstacle. Because physical qubits are prone to errors introduced by decoherence and gate imperfections, practical quantum computation requires encoding each logical qubit across many physical qubits, with the overhead depending on the target error rate and the error-correction protocol employed. Current estimates suggest that a fault-tolerant logical qubit may require anywhere from hundreds to thousands of physical qubits. In 2019, Google announced that its 53-qubit Sycamore processor had achieved "quantum supremacy" by completing in 200 seconds a specific computational task that it estimated would require approximately 10,000 years on the most powerful existing classical supercomputer — a claim contested by IBM, which argued that an optimised classical simulation would require only two and a half days. The episode highlighted both the genuine progress being made and the difficulty of establishing unambiguous benchmarks for quantum advantage.
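The scale of this overhead can be sketched with the surface code, the most widely studied error-correction scheme (the passage does not name a specific protocol, so the following is illustrative only). A distance-d surface code uses roughly

```latex
n_{\text{phys}} \approx 2d^{2} \quad\text{physical qubits per logical qubit,}\qquad
p_{L} \sim \left(\frac{p}{p_{\text{th}}}\right)^{(d+1)/2},
```

where p is the physical error rate and p_th the code's threshold. With p ≈ 10⁻³ and p_th ≈ 10⁻², driving the logical error rate p_L down to around 10⁻¹⁰ calls for d ≈ 19, or roughly 700 physical qubits per logical qubit — squarely within the hundreds-to-thousands range quoted above.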
Researcher John Preskill, who coined the term "quantum supremacy" in 2012, has since advocated for the more measured concept of "quantum advantage" — the point at which quantum computers outperform classical alternatives on commercially or scientifically relevant tasks. Identifying such tasks is itself a non-trivial challenge: the categories of problem that admit of quantum speedup are narrower than early popular accounts suggested, and many candidate applications require error rates far below those currently achievable. Nonetheless, leading research groups have identified promising near-term applications in quantum chemistry (simulating molecular interactions to accelerate drug and materials discovery), optimisation (solving logistics and financial modelling problems), and machine learning (accelerating certain training algorithms).
The geopolitical dimensions of quantum computing add urgency to the technological race. Recognising the strategic implications of a cryptographically relevant quantum computer — one capable of breaking current encryption standards — governments in the United States, China, the European Union and elsewhere have committed billions of dollars to programmes intended to accelerate development. In parallel, standardisation bodies such as the National Institute of Standards and Technology have been working to develop post-quantum cryptographic standards — classical algorithms resistant to quantum attack — as a hedge against the eventual arrival of cryptographically capable machines. Michelle Simmons, founder of Silicon Quantum Computing in Australia, has argued that the nation which achieves full fault tolerance first will hold decisive advantages in both national security and economic competitiveness, framing quantum computing as the defining technological contest of the coming decades.
The timeline to widespread practical deployment remains deeply uncertain. Optimistic projections from leading technology companies speak of fault-tolerant systems within a decade; more cautious academic assessments suggest that commercially transformative quantum computers lie 20 to 30 years away, if they prove feasible at all. What seems clear is that the field has passed the point of being merely theoretical: billions of dollars of investment, tens of thousands of researchers and the full resources of several major nation-states are now committed to making quantum computing a practical reality.
Do the following statements agree with the information given in the passage? Select TRUE, FALSE, or NOT GIVEN.
Answer the questions below. Write NO MORE THAN THREE WORDS from the passage.
Match the following headings to paragraphs A–F.
Match each researcher/concept (27–31) with their contribution or description (A–E).
Complete the sentences using NO MORE THAN TWO WORDS from the passage.