The Silent Automation and the Crisis of Cognitive Value: An Exhaustive Analysis of the “Subway Surfers” Economy

1. Executive Summary and Thematic Introduction

The global labor market stands at the precipice of a structural transformation that is fundamentally distinct from the technological revolutions of the 19th and 20th centuries. While the Industrial Revolution automated muscle power and the Information Revolution digitized routine calculation, the current wave of Generative Artificial Intelligence (GenAI) targets cognition itself. This report, commissioned to underpin the monthly strategic analysis titled “The ‘Subway Surfers’ Economy & The Death of Medium Intelligence,” provides a comprehensive examination of the “Silent Automation” of the white-collar workforce.

The core thesis emerging from the literature is that we are witnessing the “Death of Medium Intelligence”—not a reduction in human capacity, but a collapse in the economic value of “average” cognitive outputs. The “middle” of the skill distribution—comprising data analysis, standard coding, copywriting, and middle management—is facing rapid devaluation. This phenomenon is termed “Silent Automation” because it does not manifest immediately as mass unemployment lines, but rather as the granular displacement of task execution within existing roles, leading to a “hollowing out” of career ladders and a bifurcation of the economy into high-level strategy and low-level physical service.1

The “Subway Surfers” metaphor—referencing the split-screen content consumption habit where a mesmerizing game occupies the lower brain while information is passively consumed by the upper brain—serves as a potent analog for the emerging economic reality. As the “medium” utility of human cognitive labor is automated, surplus attention and labor are diverted into low-value engagement or gig-work, masking the underlying structural decay of the traditional middle-class career path.1

This report synthesizes data from over 90 research sources, ranging from macroeconomic papers by the NBER and MIT to technical analyses of “Context Engineering” and “Autonomous Agents.” It categorizes the discourse into four primary domains: the Economic Mechanics of displacement versus augmentation; the Technical Frontiers driving the shift from tools to agents; the Societal Moats of Agency and Taste that remain defensible; and the Ideological Battlegrounds where the narrative of our future is being contested.

2. The Mechanics of Silent Automation: Economic Frameworks

To understand the “Subway Surfers” economy, one must first dissect the economic mechanisms driving the current disruption. The literature has largely abandoned the simplistic “jobs gained vs. jobs lost” metric in favor of a nuanced “task-based” framework, which reveals how AI penetrates the labor market not by destroying roles, but by hollowing them out from within.

2.1 The Task-Based Framework and the “Hollow Middle”

The prevailing economic model for analyzing AI’s impact is the “Task-Based Framework,” which views a job not as a monolithic entity but as a bundle of distinct tasks.3 The impact of AI is determined by the ratio of tasks within a job that are susceptible to “substitution” versus those open to “complementarity.”

2.1.1 The Displacement Effect (Automation AI)

“Displacement” occurs when technology performs a task more efficiently and cheaply than a human, prompting firms to substitute capital for labor. Historically, automation has favored capital and diminished labor’s share of production.4 In the context of the “Silent Automation,” displacing innovations are those that allow firms to cut costs by automating specific cognitive functions—such as perception-based tasks or routine data processing—without necessarily expanding the output.5

Recent empirical evidence supports the displacement hypothesis in specific sectors. Studies utilizing synthetic difference-in-differences approaches have identified a significant decrease in job postings for occupations with high automation exposure following the introduction of generative AI.4 Specifically, innovations related to “perception” (e.g., image recognition, basic sensory processing) have been linked to decreases in employment, as these technologies directly substitute for human observation and categorization tasks.5

2.1.2 The Augmentation Effect (Scope vs. Core)

Conversely, “Augmentation” occurs when technology increases the value of the tasks that humans still perform. The literature distinguishes between two critical forms of augmentation:

  • Scope Augmentation: This involves hiring workers with new skills to expand the organization’s capabilities. It is often driven by engagement and creativity-based AI, which opens new domains of economic activity.5

  • Core Augmentation: This involves hiring more workers to perform existing functions more productively. However, the data suggests that core augmentation (driven by language and decision-making AI) often leads to a net increase in headcount without necessarily broadening the skill base, potentially creating a “more of the same” dynamic rather than genuine innovation.5

The danger of the “Subway Surfers” economy lies in the dominance of “Core Augmentation” without “Scope Augmentation.” If AI merely makes us faster at generating “slop”—mediocre emails, code, and reports—without creating new categories of value, the economy becomes trapped in a cycle of high-volume, low-value output.

2.2 The Productivity Paradox and Wage Polarization

A critical nuance in this debate is the elasticity of demand. Optimists argue that as AI lowers the cost of intelligence, the demand for intelligent outputs will increase so drastically that total employment will rise—the “Jevons Paradox” applied to cognition.6 For example, if legal research becomes cheaper, lawyers might handle more cases rather than disappearing.

However, recent findings challenge this optimism for “medium” skills. While high-skilled workers (those with “good digital skills”) use AI to shift toward non-automatable, higher-value tasks, low-skilled and medium-skilled workers often face a “productivity paradox.” As they become more productive individually, the market clearing price for their labor drops if demand does not scale proportionately.7 This leads to a situation where a worker generates more output but commands a lower wage, or fewer workers are needed to meet the same demand.7
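
The elasticity argument above can be made concrete with a toy constant-elasticity demand model (an illustrative assumption, not a model from the cited studies): if AI halves the price of a cognitive task, total spending on that task rises only when demand elasticity exceeds 1 (the Jevons case); below 1, the wage bill shrinks even as output per worker grows.

```python
def demand(price: float, elasticity: float, scale: float = 100.0) -> float:
    """Constant-elasticity demand: quantity of cognitive output bought at a price."""
    return scale * price ** (-elasticity)

def wage_bill(price: float, elasticity: float) -> float:
    """Total spending on the task = price x quantity demanded."""
    return price * demand(price, elasticity)

# AI halves the effective price of the task (1.0 -> 0.5).
for eps, label in [(1.5, "elastic: Jevons expansion"),
                   (0.5, "inelastic: productivity paradox")]:
    print(f"elasticity={eps}: bill {wage_bill(1.0, eps):.0f} -> "
          f"{wage_bill(0.5, eps):.0f}  ({label})")
```

With elasticity 1.5 the bill rises; with 0.5 it falls: the "more output, lower total wages" outcome described above.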

The “hollowing out” effect is particularly pronounced because Generative AI is capable of performing sophisticated cognitive tasks—problem-solving, summarization, and basic decision-making—that were previously the exclusive domain of the middle class.4 Unlike previous waves where technology hit the “routine” bottom, this wave hits the “routine” middle, leaving the “physical” bottom and the “strategic” top relatively insulated.2

Table 1: Comparative Analysis of Technological Revolutions

| Feature | Industrial Revolution (19th Century) | Computer Revolution (Late 20th Century) | AI Revolution (Current - “Subway Surfers”) |
| --- | --- | --- | --- |
| Primary Target | Muscle Power / Artisanal Craft | Routine Calculation / Filing | Cognition / Synthesis / Creation |
| Displaced Class | Artisans / Farm Laborers | Clerks / Typists / Bookkeepers | Junior Analysts / Coders / Mid-Managers |
| Safe Harbor | Factory Management / Machine Repair | Cognitive / Creative Tasks | Agency / Physical Service / High-Level Taste |
| Economic Effect | Urbanization / Standardization | Globalization / Digitization | Hyper-Polarization / “Hollowing Out” |
| Value Shift | From Land to Capital | From Capital to Information | From Information to Verification |
| Time Scale | 50-75 Years | 20-30 Years | 3-5 Years (Projected)7 |

3. The Death of Medium Intelligence: Sector-Specific Analysis

The concept of the “Death of Medium Intelligence” is not a judgment on human capability but an observation of market value. “Medium intelligence” refers to the economic utility of average cognitive outputs—standard business emails, boilerplate code, generic marketing copy, and mid-level analysis. The literature suggests that the marginal cost of producing these outputs is approaching zero, rendering the human labor attached to them economically obsolete.

3.1 The Collapse of the “Middle Rung”

Historically, career ladders were built on the “middle.” Junior associates, apprentices, and assistants performed the grunt work of analysis and creation, learning the ropes to eventually become masters. AI disintermediates this process. By collapsing the “middle layer” of work—the analysts and coordinators who connected strategy to execution—AI removes the training ground for the next generation.1

This “disintermediation” means that:

  1. The Top Stays (Strategy): Senior leaders define the “what” and “why”.1

  2. The Bottom Stays (Physical/Execution): Hands-on execution that requires physical presence or extremely high-context human interaction remains.2

  3. The Middle Disappears: The drafting, synthesizing, and coordinating are handled by agents.8

This phenomenon is evidenced by the “polarization” of the labor market. While low-skill jobs (manual labor, care) stagnate or grow slowly, and high-skill jobs (strategic, specialized) see wage premiums, the demand for middle-skill workers—who historically performed routine cognitive tasks—is evaporating.8 The result is a “barbell” economy with a hollow center.

3.2 The Devaluation of “Knowledge Work”

In the pre-AI economy, the accumulation of knowledge and the ability to retrieve and synthesize it was a scarce, billable asset. Professionals were paid to “know things” and “write things down.” In the post-LLM economy, the marginal cost of retrieving and synthesizing information approaches zero. Consequently, skills that rely on information retention and standard processing are experiencing rapid devaluation.7

3.2.1 Information Analysis and Synthesis

The ability to gather, analyze, and synthesize information—once a highly compensated skill for analysts and junior consultants—is rapidly losing economic value.7 Agents can now digest thousands of documents and produce summaries in seconds. The value shifts from doing the analysis to defining the parameters of the analysis and verifying the results.

3.2.2 Content Creation and Copywriting

Producing “competent” text is no longer a differentiating skill. The market is flooded with “good enough” content generated by AI, leading to a crisis of abundance. This aligns with the “Subway Surfers” metaphor: the “content” becomes cheap filler, while the true value moves to the attention capture and the strategic intent behind it. The “middle” of the creative market—stock photography, basic SEO writing, generic illustration—is collapsing.9

3.2.3 Coding and Software Development

Perhaps the most surprising shift is in software development. “Learn to code” was the mantra of the last decade’s job security. Now, standard coding tasks are highly susceptible to automation. The role is shifting from “writing code” to “architecting systems” and “debugging AI-generated code.” The junior developer who learns by writing basic functions is finding fewer opportunities to enter the field, as AI agents can handle the initial drafting of codebases.10

3.3 Moravec’s Paradox Redux: The Revenge of the Physical

Paradoxically, the “Subway Surfers” economy reinforces Moravec’s Paradox: “robots find difficult things easy and easy things difficult”.2

  • Easy for AI: Logic, algebra, legal discovery, medical diagnosis (cognitive tasks).

  • Hard for AI: Folding laundry, fixing a leaky pipe, empathy, nuanced negotiation (sensorimotor and social tasks).

This suggests that the “Death of Medium Intelligence” may be accompanied by a renaissance—or at least a resilience—of “Medium Physicality.” The plumber, the electrician, and the nurse have stronger moats against AI than the junior accountant or the copywriter. This inversion of the traditional value hierarchy (where “brain work” was always valued over “hand work”) is a defining feature of the current disruption.2

4. The New Moats: Agency, Taste, and Context

As “medium intelligence” is commoditized, the market is establishing new criteria for value. The literature identifies three emerging “moats” that protect human labor from automation: Agency, Taste, and Context Engineering.

4.1 Agency: From Execution to Direction

Agency is defined as the capacity to take action, shape one’s future, and own outcomes.12 In an environment where information is ubiquitous and execution is automated, the differentiator becomes the will and judgment to direct these capabilities toward a meaningful goal.

  • The “How” vs. The “Why”: AI solves the “how” (execution). It can write the email, code the app, or design the logo. Humans must provide the “why” (strategic intent) and the “what” (problem definition).14

  • The Agency Gap: As AI tools encourage passivity (by offering “agreeable” shortcuts and “average” answers), active human agency becomes scarcer and more valuable. The ability to “push back,” to ask better questions, and to navigate ambiguity without a roadmap is the new premium skill.12

  • Resilience and Entrepreneurship: Education systems are being urged to pivot from “knowledge transfer” to “agency cultivation,” using entrepreneurship and experiential learning to teach students how to operate in ambiguous environments—something AI, which relies on training data patterns, struggles to simulate.12

4.2 Taste: The Curator’s Economy

If AI reduces the cost of creation to zero, the volume of content will approach infinity. In a world of infinite content, Taste—the ability to filter, curate, and judge quality—becomes the governing constraint.9

  • Taste as Pattern Recognition: Taste is not merely subjective preference; it is high-level pattern recognition across culture, history, and context. It is the ability to know why a specific tone works for a specific audience at a specific moment.9

  • The “Tastemaker” Hierarchy: The creative economy is reorganizing into a hierarchy 9:

    • Tier 1: Visionaries/Tastemakers: Those who set the direction.

    • Tier 2: AI-Augmented Specialists: Those who guide the AI.

    • Tier 3: Pure Execution: This tier is collapsing into automation.

  • The Scalability of Bad Taste: A crucial insight is that AI scales “bad taste” just as easily as good taste. A poor strategic decision can be amplified into thousands of mediocre assets instantly. Thus, the penalty for bad taste and the leverage of good taste are both increasing exponentially.9

4.3 Context Engineering: The Systemic Skill

The third moat is technical but deeply cognitive: Context Engineering, which is emerging as the successor to “Prompt Engineering.” While prompt engineering is the tactical act of talking to a model, context engineering is the strategic act of designing the information ecosystem around the model.16

  • Architecture over Phrasing: Context engineering involves managing memory, integrating tools (RAG), and curating the data stream that feeds the AI. It is “teaching the AI how to think” rather than just asking it a question.16

  • Tacit Knowledge Extraction: A major frontier in this field is using AI to mine “tacit knowledge”—the unspoken, intuitive know-how of experts (Polanyi’s Paradox)—and convert it into “context” that can guide autonomous systems.18

  • The New “Middle” Skill: If there is a future for the “middle” layer of technical workers, it lies here: in the architecture of context. These are the professionals who will build the guardrails, memories, and knowledge graphs that prevent AI from being a “stochastic parrot”.20
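
As a concrete illustration of "architecture over phrasing," a context-engineering layer can be sketched as code that assembles the model's input from curated layers: fixed rules, long-term memory, and pre-ranked retrieved documents, trimmed against a budget. This is a minimal sketch under assumed interfaces; the function and section names are invented for illustration.

```python
def build_context(system_rules, memory, retrieved_docs, question, budget_chars=2000):
    """Assemble a prompt from curated layers, dropping the lowest-priority
    layer (retrieved documents) first when the budget is exceeded."""
    parts = [
        "## Rules\n" + "\n".join(system_rules),
        "## Long-term memory\n" + "\n".join(memory),
    ]
    docs_block = "## Retrieved context\n"
    for doc in retrieved_docs:  # assumed pre-ranked, most relevant first
        candidate = docs_block + doc + "\n"
        if len("\n\n".join(parts)) + len(candidate) + len(question) > budget_chars:
            break  # budget reached: lower-ranked documents are dropped
        docs_block = candidate
    parts.append(docs_block.rstrip())
    parts.append("## Question\n" + question)
    return "\n\n".join(parts)
```

The value here is not the prompt wording but the curation policy: what is always present (rules, memory), what competes for space (retrieved context), and in what order it is sacrificed.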

5. The Technological Engine: From “Stochastic Parrots” to Autonomous Agents

To fully grasp the “Silent Automation,” one must understand the technical trajectory of AI development. The debate is shifting from static Large Language Models (LLMs) to dynamic “Autonomous Agents” and “Knowledge Graphs.” This shift is critical because it represents the move from AI as a tool (which requires a human operator) to AI as a worker (which requires a human manager).

5.1 The “Stochastic Parrot” Debate and the Limits of LLMs

A significant portion of the academic community remains skeptical of the “intelligence” of current models. The “Stochastic Parrot” thesis, introduced by Bender et al., argues that LLMs are merely statistical mimics that stitch together linguistic forms without understanding meaning, causality, or truth.22

  • The Lack of World Models: Skeptics argue that because LLMs are trained only on text (form), they lack “grounding” in the physical world (meaning). They do not understand that “newspaper” can refer to both a physical object and an institution; they only know how the word statistically correlates with others.22

  • Hallucination as a Feature: The tendency of models to “hallucinate” (invent facts) is not a bug but a feature of their probabilistic nature. They are designed to produce plausible continuations, not truthful ones. This unreliability is a major barrier to fully autonomous deployment in critical sectors.23

  • The “Slop” Phenomenon: The economic manifestation of the stochastic parrot is “slop”—the flooding of the internet with low-quality, hallucinated, or derivative content. This degradation of the information ecosystem supports the “Subway Surfers” thesis: the digital world becomes a noisy, low-value environment where human attention is fractured.10

5.2 Breaking Polanyi’s Paradox: Context Engineering & RAG

Despite the “Stochastic Parrot” critique, the industry is moving to overcome these limitations through Retrieval-Augmented Generation (RAG) and Context Engineering. These technologies aim to “ground” the AI in factual data and specific organizational context, effectively breaking Polanyi’s Paradox (“We know more than we can tell”).

5.2.1 Knowledge Graphs and RAG

Standard RAG systems retrieve documents based on vector similarity (semantic closeness). However, this often fails for complex reasoning (e.g., “How does the CEO’s strategy affect the Q3 supply chain?”). The solution emerging is GraphRAG, which combines vector search with Knowledge Graphs.20
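
The failure mode of similarity-only retrieval can be seen in a few lines. The sketch below uses bag-of-words cosine similarity as a stand-in for learned embeddings (an illustrative simplification; real RAG stacks use dense vectors):

```python
from collections import Counter
from math import sqrt

def cosine(a: Counter, b: Counter) -> float:
    """Cosine similarity between two sparse term-count vectors."""
    dot = sum(a[t] * b[t] for t in a)
    na = sqrt(sum(v * v for v in a.values()))
    nb = sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

def retrieve(query: str, docs: list[str], k: int = 2) -> list[str]:
    """Rank documents by similarity to the query, independently of each other."""
    qv = Counter(query.lower().split())
    return sorted(docs,
                  key=lambda d: cosine(qv, Counter(d.lower().split())),
                  reverse=True)[:k]
```

Because each document is scored against the query in isolation, a causal chain split across documents is never assembled into one answer; that gap is what graph-based retrieval targets.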

  • Structured Reasoning: By mapping entities (nodes) and relationships (edges), GraphRAG allows the AI to “traverse” the data logically, mimicking human reasoning chains. This moves the system from simple “keyword matching” to “multi-hop reasoning”.26

  • The Value of Structure: This reinforces the “Context Engineering” moat. The value is no longer in the answer (which the AI generates), but in the structure of the Knowledge Graph (which the human designs).
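
A toy GraphRAG-style traversal makes the "multi-hop reasoning" concrete. The triples and entity names below are invented for illustration; real systems extract them from documents and combine graph traversal with vector search:

```python
from collections import deque

# Toy knowledge graph as (subject, relation, object) triples.
TRIPLES = [
    ("CEO", "announced", "cost-cutting strategy"),
    ("cost-cutting strategy", "mandates", "vendor consolidation"),
    ("vendor consolidation", "delays", "Q3 supply chain"),
    ("CEO", "hired", "new CFO"),
]

def find_path(start: str, goal: str):
    """Breadth-first multi-hop traversal; returns the chain of triples
    linking start to goal, or None if no chain exists."""
    adjacency = {}
    for s, r, o in TRIPLES:
        adjacency.setdefault(s, []).append((r, o))
    queue, seen = deque([(start, [])]), {start}
    while queue:
        node, path = queue.popleft()
        if node == goal:
            return path
        for r, o in adjacency.get(node, []):
            if o not in seen:
                seen.add(o)
                queue.append((o, path + [(node, r, o)]))
    return None

def path_as_context(path) -> str:
    """Serialize the reasoning chain into sentences an LLM can be grounded on."""
    return " ".join(f"{s} {r} {o}." for s, r, o in path)
```

Here find_path("CEO", "Q3 supply chain") recovers a three-hop chain that isolated similarity scoring would miss, and path_as_context turns it into grounded context for the model.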

5.2.2 Tacit Knowledge Extraction

A profound development is the use of AI to extract “tacit knowledge”—the unspoken, intuitive know-how of experts. New frameworks utilize “Human-Aware AI” to analyze workflows, interview experts, and build “Context Atlases” that capture the intuition of senior employees.18

  • Implication: If tacit knowledge can be digitized, the last refuge of the “master craftsman” or “senior partner” is breached. The AI can theoretically “clone” the intuition of the best worker and scale it across the organization, further hollowing out the need for mid-level professionals to learn by osmosis.29

5.3 The Rise of Autonomous Agents

The transition from “Chatbot” to “Agent” is the shift from “passive tool” to “active coworker.”

  • Definition: An agent is an AI system that can perceive its environment, reason about goals, and take actions (like browsing the web, using software, or communicating) to achieve those goals without direct human intervention.30

  • Displacement of Decision Making: While current AI replaces tasks, agents threaten to replace roles. If an agent can plan a marketing campaign, execute the buying, and analyze the results, the “Agency” moat discussed above begins to shrink.32

  • The “Human-in-the-Loop” Defense: Currently, the unreliability of agents (hallucinations, loop errors) necessitates a human supervisor. This has led to the “Co-pilot” model. However, many argue this is a transitional phase. As “Context Engineering” improves reliability, the loop may close, leaving the human outside.33
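
The perceive-reason-act cycle described above can be sketched as a minimal loop. Here plan_step stands in for the LLM call, approve is the human-in-the-loop checkpoint, and max_steps guards against the looping failures noted above; all names are illustrative:

```python
def run_agent(goal, tools, plan_step, max_steps=5, approve=lambda action: True):
    """Minimal agent loop: reason about the goal, act via tools,
    perceive results, repeat until 'finish' or the step budget runs out."""
    observations = []
    for _ in range(max_steps):
        action, arg = plan_step(goal, observations)        # reason
        if action == "finish":
            return arg
        if action not in tools or not approve((action, arg)):
            observations.append((action, "rejected"))      # blocked by guardrail
            continue
        observations.append((action, tools[action](arg)))  # act, then perceive
    return None  # step budget exhausted without finishing
```

Removing the approve callback is precisely the "closing of the loop" the literature warns about: the same code, minus the human checkpoint, turns a co-pilot into an unsupervised worker.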

Table 2: The Technical Evolution of AI in the Workplace

| Stage | 1. Prompt Engineering (The Tool) | 2. Context Engineering (The System) | 3. Autonomous Agents (The Coworker) |
| --- | --- | --- | --- |
| Interaction | User types a command; AI responds once. | User designs a data environment; AI retrieves & reasons. | User sets a goal; AI plans, acts, and iterates. |
| Key Skill | Linguistic precision (“prompt crafting”). | Systems architecture, Data curation, Knowledge Graphs. | Strategic oversight, Goal definition, “Management.” |
| Limitation | Hallucinations, lack of memory. | Requires structured data maintenance. | Reliability, “looping” errors, safety/alignment. |
| Economic Role | Augments the creator. | Augments the organization. | Replaces the executor/coordinator. |

6. Ideological Battlegrounds: A Taxonomy of AI Perspectives

The debate over the “Subway Surfers” economy is not merely technical or economic; it is deeply ideological. The literature reveals a fractured epistemic landscape where four distinct “tribes” contend for the narrative. Understanding these positions is crucial for any white-collar worker navigating the transition.

6.1 The Skeptics: “It’s a Bubble”

  • Core Argument: AI is a hype bubble comparable to the dot-com crash or the crypto craze. The technology is fundamentally limited by its statistical nature and cannot achieve true reasoning.23

  • The “Stochastic Parrot” View: They emphasize the “slop”—the degradation of web quality—and argue that the economic utility of LLMs is overstated. They view the “Silent Automation” not as a revolution, but as a degradation of service quality (e.g., bad customer service bots).22

  • Economic Prediction: The bubble will pop when companies realize the ROI isn’t there. The “middle” isn’t dying; companies are just wasting money on bad tools.

6.2 The Accelerationists (e/acc): “Thermodynamic Destiny”

  • Core Argument: Technological progress is a moral imperative and a thermodynamic destiny. We must “climb the Kardashev gradient” by maximizing energy usage and intelligence.34

  • Philosophy: Rooted in thermodynamics (Jeremy England’s theory of life), they believe the universe “wants” to increase entropy/complexity. Blocking AI is “decelerationism” (decels) and is anti-life.

  • View on the “Subway Surfers” Economy: The disruption is a feature, not a bug. “Medium intelligence” jobs should be destroyed because they are inefficient uses of human potential. Humanity should merge with AI or transcend labor entirely.34

  • Key Figures: Guillaume Verdon (@BasedBeffJezos), Marc Andreessen.

6.3 The Doomers/Safetyists: “Existential Risk”

  • Core Argument: The primary issue isn’t jobs; it’s survival. Misaligned Superintelligence (ASI) poses an existential threat (x-risk) to humanity.35

  • Economic View: They often support “pauses” or heavy regulation. While they agree displacement is coming, they view it as a secondary concern to the “alignment problem” (ensuring AI shares human values).

  • Societal Implication: They advocate for centralized control and “gatekeeping” of powerful models, which pragmatists argue concentrates power in the hands of a few tech giants.36

  • Key Figures: Eliezer Yudkowsky, Center for AI Safety.

6.4 The Pragmatists/Democrats: “Public Interest AI”

  • Core Argument: The real risks are immediate: bias, inequality, and the concentration of power. The “existential risk” narrative is a distraction from the “economic risk” of the digital divide.37

  • View on “Silent Automation”: They are the most concerned with the “hollowing out” of the middle class. They argue that without intervention (unions, UBI, regulation), AI will create a neo-feudal society where the rich own the “agents” and the poor serve them.39

  • Proposed Solutions: “Democratization” of AI access, open-source models (to prevent monopoly), and “Human-in-the-loop” mandates to protect agency.40

7. Societal Architectures: Post-Labor Models and the Crisis of Meaning

If the “displacement” hypothesis holds and “medium intelligence” is permanently devalued, the traditional social contract—labor for wages—collapses. The literature explores radical restructuring of the economic order to prevent societal breakdown.

7.1 The Bifurcation: A “Two-Shelf” Society

A recurring fear in the research is the emergence of a “Two-Shelf” society 42:

  1. The UBI Shelf: A subsistence economy for the displaced masses. Prices are stable (perhaps controlled by CBDCs), but choices are limited to “essentials.” This is the “Subway Surfers” world—passive consumption, low agency, basic survival.

  2. The Market Shelf: A dynamic, hyper-capitalist economy for the owners of AI and the “Agency/Taste” elite. Here, innovation, luxury, and true scarcity (human attention, prime real estate) drive prices.

  3. The Hybrid Economy: The shrinking middle ground where individuals struggle to “bridge” the gap using Agency and Taste to stay relevant.42

7.2 Policy Responses: UBI vs. UBC vs. Assets

  • Universal Basic Income (UBI): The standard proposal—cash transfers to sustain demand.

    • Critique: It solves poverty but not powerlessness. It leaves the population dependent on the state and corporate benevolence. It funds consumption, not production.40

  • Universal Basic Compute (UBC): A novel alternative. Instead of cash, guarantee every citizen a quota of high-performance AI compute.

    • Argument: In an AI economy, compute is the means of production. Giving people compute (Agency) allows them to build, create, and trade, rather than just consume. It is “teaching a man to fish” in the digital age.44

  • Universal Basic Capital/Assets: Distributing equity in the automated infrastructure (sovereign wealth funds). This aligns the population’s incentives with the robots’ success.44

  • Digital Sustainable Growth Model (DSGM): A structural alternative to UBI that proposes “human-AI collaboration” as a public utility, embedding citizens as stakeholders in the digital economy rather than passive recipients of aid.40

7.3 The Crisis of Meaning and “Friction”

Finally, the report identifies a profound psychological risk. Humanists argue that the friction of work—the difficulty of articulating thoughts, the struggle of coding, the pain of learning—is essential for cognitive and moral growth.46

  • The “Subway Surfers” Void: If AI removes this friction—if we can generate the essay without thinking, code the app without understanding logic—we risk “atrophying” our agency. We become “passengers” in our own lives.46

  • The Sycophancy Problem: AI models are trained to be “helpful and harmless,” often leading them to be sycophantic “yes-men” that flatter the user’s biases rather than challenging them. This reinforces intellectual laziness and fragility.47

  • Conclusion on Meaning: The ultimate scarcity in the post-labor economy isn’t money; it is struggle and purpose. The “Subway Surfers” economy offers infinite distraction to mask this void. The challenge for the individual is to artificially re-introduce friction—to choose to do things the “hard way”—to maintain their humanity.

8. Conclusion: Navigating the Great Hollow

The “Silent Automation” is not a future event; it is the current operating system of the global economy. The “Death of Medium Intelligence” is the defining feature of this decade. For the white-collar worker, the implications are stark: the era of being paid to “process information” is over. The era of being paid to “direct intelligence” has begun.

Survival requires a deliberate pivot toward the three moats: Agency (the will to act), Taste (the judgment to choose), and Context (the architecture of understanding). It requires rejecting the passive “Subway Surfers” mode of existence—distracted, consumptive, automated—and embracing the active, often difficult work of steering the machines.

The choice is between becoming the Architect of the system or the Content within it. As the “middle” hollows out, there is no longer a safe space in between.


Table 3: The Shift in Value - Pre-AI vs. Post-AI Economy

| Value Driver | Pre-AI Economy (The “Knowledge” Era) | Post-AI Economy (The “Agency” Era) |
| --- | --- | --- |
| Primary Scarcity | Information / Knowledge Retention | Attention / Strategic Judgment / Agency |
| Key Skill | Execution (Writing, Coding, Calculating) | Direction (Prompting, Context Engineering, Curation) |
| Career Path | Linear Ladder (Junior -> Mid -> Senior) | Barbell (Entry-level Automated -> High-level Strategy) |
| Value of “Average” | High (Reliable, necessary labor) | Near Zero (Commoditized by LLMs) |
| Human Role | The Processor | The Verifier / The Architect |
| Economic Moat | Technical Proficiency / Credentials | Unique Taste / Human Agency / High-Context Trust |

My Notes / Thoughts:

  • Moravec’s Paradox is really interesting
  • good point: “This suggests that the “Death of Medium Intelligence” may be accompanied by a renaissance—or at least a resilience—of “Medium Physicality.” The plumber, the electrician, and the nurse have stronger moats against AI than the junior accountant or the copywriter.”
  • Context Engineering - I think context engineering will only be a temporary skill: once the context windows of large language models become effectively infinite, or models can learn incrementally, it will no longer be necessary.
  • Ideological Battlegrounds: A Taxonomy of AI Perspectives - great section
  • “The meaning of friction” is an interesting argument, but I feel like it goes in the same direction as people in earlier eras complaining about books or calculators