<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>CitedMind Feed</title>
  <link href="https://citedmind.com/" rel="alternate"/>
  <link href="https://citedmind.com/feed.xml" rel="self"/>
  <updated>2026-05-11T00:00:00Z</updated>
  <id>https://citedmind.com/</id>
  <subtitle>Exploring philosophy and trust in AI-powered knowledge synthesis</subtitle>
  
  <entry>
    <title>The Reader Who Didn&#39;t Read: How AI Synthesis Transforms Knowing</title>
    <link href="https://citedmind.com/posts/2026/05/11/the-reader-who-didnt-read/"/>
    <updated>2026-05-11T00:00:00Z</updated>
    <id>https://citedmind.com/posts/2026/05/11/the-reader-who-didnt-read/</id>
    <summary>In the early 2000s, Xerox photocopiers shipped with a compression algorithm designed to save storage space on scanned documents. The algorithm worked well enough that nobody noticed when it introduced a subtle error: in documents containing three different numerical values in close proximity — say, three room areas on a floor plan — the compressor sometimes replaced all three with the same number....</summary>
    <content type="html"><p>In the early 2000s, Xerox photocopiers shipped with a compression algorithm designed to save storage space on scanned documents. The algorithm worked well enough that nobody noticed when it introduced a subtle error: in documents containing three different numerical values in close proximity — say, three room areas on a floor plan — the compressor sometimes replaced all three with the same number. The values 14.13, 21.11, and 17.42 would silently become 14.13, 14.13, and 14.13. The compressed file was smaller. It looked identical to the original on casual inspection. But the information it carried was structurally different, and the people relying on it had no way to detect the substitution without the original document in hand.</p>
<p>Ted Chiang, writing in <em>The New Yorker</em> in 2023, used this story as an analogy for how large language models handle the web's information. A JPEG of the web, he called ChatGPT — a lossy compression that preserves the general shape of things while silently discarding the details. The analogy was powerful, but it may not go far enough. The Xerox copier compressed a static image. AI compresses <em>understanding itself.</em> And the reader who relies on the compression is not just working with slightly altered data. They are experiencing a fundamentally different epistemic state — one whose difference from original reading is not a matter of degree, but of kind.</p>
<h2 id="the-destination-without-the-map" tabindex="-1">The Destination Without the Map <a class="header-anchor" href="#the-destination-without-the-map">#</a></h2>
<p>The conventional defense of AI synthesis is straightforward: a good summary captures the essential content. If the summary faithfully represents what the original says, the reader who reads the summary knows the same thing the reader who read the original knows. The difference is efficiency, not epistemology. You arrived at the destination by a faster route, but you stand in the same place.</p>
<p>This defense collapses under scrutiny from two directions simultaneously: one cognitive, one philosophical.</p>
<p>The cognitive direction begins with the work of Danielle McNamara and her Self-Explanation Reading Training (SERT) framework, published in <em>Discourse Processes</em> in 2004. Across a series of studies, McNamara demonstrated that deep comprehension is not a matter of receiving information — it is a matter of constructing a mental model. Readers who generate self-explanations during reading, who make inferences, who ask why a claim follows from previous claims — these readers show significantly better retention and transfer than readers who passively process the same material. The act of constructing understanding is itself constitutive of what is understood. The cognitive effort of reading is not a tax on comprehension. It is the engine of comprehension.</p>
<p>When an AI reads a source and produces a summary, it does the construction work. It draws the inferences. It makes the connections. It formulates the explanations. The reader receives the output of this cognitive process — the destination — but has not traveled the route. And the SERT research strongly suggests that the route is where the understanding lives. The summary gives you the claim without the reasoning architecture that makes the claim meaningful. You know <em>that</em> something is true. You do not know <em>why</em> it is true in the way that someone who traced the argument from premise to conclusion knows why.</p>
<p>The philosophical direction sharpens this distinction. Propositional knowledge — knowing-that — is only one dimension of understanding. There is also structural knowledge: knowing-why, knowing-how, knowing-when-to-qualify. A person who reads the original source knows not just the conclusions but also the reasoning chain, the caveats, the methodology's limitations, the author's hedging, the scope conditions under which the claim holds. The synthesis gives you a static claim where the original gave you a dynamic system of reasoning. The question is not whether the claim is accurate. The question is whether possessing the claim without the reasoning system around it constitutes knowing the same thing.</p>
<p>It does not.</p>
<h2 id="the-google-effect-and-the-outsourcing-of-memory" tabindex="-1">The Google Effect and the Outsourcing of Memory <a class="header-anchor" href="#the-google-effect-and-the-outsourcing-of-memory">#</a></h2>
<p>The cognitive consequences of synthesis consumption are compounded by a second mechanism, documented by Betsy Sparrow, Jenny Liu, and Daniel Wegner in their landmark 2011 <em>Science</em> paper on the Google effect. The researchers found that when people expect to have future access to information via computer search, their recall of the information itself drops significantly. Instead, they remember <em>where</em> to find it — the search path, the storage location, the retrieval method. The Internet becomes a form of transactive memory, and the process of knowing when and how to search becomes part of the cognitive process itself.</p>
<p>The Google effect applies to AI synthesis with particular force, and the mechanism extends beyond simple recall. When a reader knows they can ask an AI to summarize any text, the structural engagement with the source material becomes optional. Why build a mental model when the model is one click away? Why trace the reasoning chain when the conclusion is already extracted? The synthesis becomes not a supplement to reading but a substitute for it — and the substitute does not preserve the cognitive state that reading would have produced.</p>
<p>There is an important subtlety here. The Google effect study measured recall of <em>facts</em>. The AI synthesis effect we are describing involves something deeper: the encoding of <em>structure</em>. Research participants who read an original study and participants who read an AI summary of the same study may perform equally well on a factual quiz about the conclusions. But ask them to explain <em>how</em> the authors arrived at those conclusions, or to identify the conditions under which the conclusions might not hold, and the gap becomes apparent. The synthesis-reader has the destination. The original-reader has the map.</p>
<h2 id="the-feeling-of-understanding" tabindex="-1">The Feeling of Understanding <a class="header-anchor" href="#the-feeling-of-understanding">#</a></h2>
<p>The most insidious dimension of this phenomenon is that the synthesis-reader rarely knows they are missing anything. The confident, fluent prose of a well-written AI summary triggers the subjective experience of understanding without the underlying cognitive structure to support it.</p>
<p>This is not a coincidence of poorly designed tools. It is a predictable consequence of how human metacognition works. Leonid Rozenblit and Frank Keil demonstrated this in their 2002 work on the illusion of explanatory depth, published in <em>Cognitive Science</em>. Their studies showed that people systematically overestimate how well they understand complex causal systems — how a toilet works, how a helicopter flies, how a policy change will affect an economy. When asked to provide step-by-step causal explanations, participants dramatically downgraded their initial confidence. The illusion was maintained by surface cues that suggested understanding without requiring the underlying causal knowledge.</p>
<p>AI synthesis exploits this vulnerability with alarming precision. A well-written summary reads with the fluency and authority of an expert. It uses the right terminology. It presents claims in logical sequence. The surface characteristics — fluency, coherence, declarative confidence — trigger the same metacognitive shortcuts that produce the illusion of explanatory depth. The reader <em>feels</em> like they understand. They have the subjective experience of knowledge. But when asked to explain the reasoning chain, to identify the supporting evidence, to state the qualifications, the structure is not there. The feeling was the product, and the feeling is all they have.</p>
<p>The danger is that the synthesis-reader cannot detect this gap on their own. Unlike the Xerox copier user, who could — in principle — compare compressed and original documents side by side, the synthesis-reader has no access to the internal state they would have had if they had read the source. You cannot miss what you never had the capacity to generate. The experience of reading the original and the experience of reading the synthesis are incommensurable. They produce different cognitive states, and neither state contains reliable information about the other.</p>
<h2 id="when-does-extension-become-replacement" tabindex="-1">When Does Extension Become Replacement? <a class="header-anchor" href="#when-does-extension-become-replacement">#</a></h2>
<p>One might argue that this is simply how tools work. A calculator extends your ability to do arithmetic without requiring you to follow the steps of long division. A GPS extends your ability to navigate without requiring you to read a map. Why should reading be different?</p>
<p>Andy Clark and David Chalmers addressed this question in their 1998 paper on the extended mind, published in <em>Analysis</em>. Their argument was that external tools — notebooks, calculators, smartphones — can become genuine parts of the cognitive system if they play the same functional role that an internal process would. A notebook does not replace your memory so much as <em>extend</em> it, because it serves the same purpose: storing information for later retrieval.</p>
<p>But the extended mind thesis has a hidden requirement. For a tool to genuinely <em>extend</em> cognition, it must perform the function <em>with</em> you, not <em>instead</em> of you. A notebook extends your memory because you still do the work of deciding what to record, how to organize it, when to retrieve it. The tool augments the cognitive process; it does not replace it. AI synthesis inverts this relationship. The tool does the cognitive work — comprehension, synthesis, evaluation, inference — and presents you with the output. You do not extend your cognitive process into the tool. You outsource the cognitive process to the tool.</p>
<p>The distinction matters because outsourcing and extension produce different epistemic outcomes. Extension preserves the agent's role in the cognitive process; the agent remains the active participant, and the tool serves the agent's cognitive goals. Outsourcing removes the agent from the process; the tool serves its own function, and the agent receives the output. The reader who outsources comprehension to an AI is not an extended reader. They are a replaced reader — and what they receive is not extended knowledge but transferred information.</p>
<h2 id="four-objections-considered" tabindex="-1">Four Objections, Considered <a class="header-anchor" href="#four-objections-considered">#</a></h2>
<p>The strongest version of this thesis — that synthesis consumption produces categorically different knowledge — deserves the strongest counter-arguments.</p>
<p><strong>First objection:</strong> Synthesis is just a more efficient form of reading. If the summary faithfully captures the content, you know the same things. The difference is speed, not epistemology.</p>
<p>The response is that faithful capture of <em>propositional</em> content is not faithful capture of <em>structural</em> content. The claims may be the same, but the reasoning architecture — the qualifications, the hedging, the evidential weight — is systematically discarded in the synthesis process. The map is not the destination, and the map is what expertise is built from.</p>
<p><strong>Second objection:</strong> Humans have always used summaries. CliffsNotes, abstracts, book reviews, executive summaries — these have been part of knowledge culture for centuries. AI synthesis is just an automated version of an existing practice.</p>
<p>The difference is the <em>presentation of completeness</em>. A CliffsNotes summary is obviously a summary — different format, different voice, different register. The reader approaches it with lowered expectations. An AI summary presents itself with the same fluency and authority as an original argument. It is designed to feel complete. The reader has no structural cue that content has been omitted, hedges flattened, or uncertainty removed. This is the Xerox problem again: the compression artifact is invisible without the original.</p>
<p><strong>Third objection:</strong> This is Luddism dressed up as philosophy. Every generation accuses the new information technology of destroying &quot;true&quot; knowledge.</p>
<p>This objection mistakes a structural transformation for a generic complaint. Previous technologies — the printing press, the card catalog, Wikipedia — extended access to existing knowledge. AI synthesis produces new text that stands <em>in place of</em> the original. The printing press reproduced books. The card catalog directed you to books. Wikipedia organized existing knowledge. AI synthesis generates text that can function as a substitute for original engagement — and unlike any previous tool, it cannot tell you what it left out. The epistemic risk is not that people read differently. It is that they stop reading entirely while genuinely believing they have not.</p>
<p><strong>Fourth objection, and the strongest:</strong> If the synthesis is accurate and the reader knows they are reading a synthesis, the epistemic risk is minimal. The problem is bad syntheses and naive readers, not synthesis as a category.</p>
<p>The response operates on two levels. First, the illusion of explanatory depth means that even sophisticated readers cannot reliably detect when a synthesis has flattened crucial detail. The confidence of good prose triggers the feeling of understanding, and that feeling is not reliably correlated with actual understanding. Second, the self-explanation effect demonstrated by McNamara means that even a perfect synthesis — one that contains every claim the original makes — gives the reader a different cognitive state than reading the original, because the reader did not do the constructive work. The question is not &quot;is the synthesis accurate?&quot; The question is &quot;does the reader possess the same knowledge?&quot; And the answer, even under ideal conditions, is no.</p>
<h2 id="what-this-means-for-how-we-build" tabindex="-1">What This Means for How We Build <a class="header-anchor" href="#what-this-means-for-how-we-build">#</a></h2>
<p>This analysis is not an argument against AI tools. It is an argument for understanding what they actually do to how we know — and for designing knowledge artifacts that account for these effects rather than ignoring them.</p>
<p>The implications are practical. If we know that synthesis consumption produces a feeling of understanding without structural knowledge, then we should design tools that surface the structure, not just the conclusions. If we know that readers who skip the source cannot detect what was lost, then we should make the source as accessible as the synthesis — not buried behind a citation anchor, but presented alongside the summary in a maintained relationship. If we know that the maps matter more than the destinations, then we should treat the mapping work — the tracing of reasoning chains, the preservation of qualifications, the display of hedging and uncertainty — as core functionality, not editorial nicety.</p>
<p>The Fold Ecosystem's concept of &quot;cited knowledge&quot; was initially framed as an ethical commitment: every claim should carry its history. This post suggests that the commitment is not just ethical but cognitive. Cited knowledge is not a moral preference. It is a recognition that knowledge without provenance is not knowledge in the same sense — that the reader who reads only the synthesis and the reader who reads the source inhabit different epistemic positions, and that respecting the difference requires building tools that preserve the full architecture of understanding, not just the facade of comprehension.</p>
<p>The Xerox copier compressed room areas and nobody noticed. The values changed from 14.13, 21.11, and 17.42 to 14.13, 14.13, and 14.13. The document looked the same. The floor plan was structurally different. The question for our era is whether we can build tools that do not merely compress knowledge into plausible text, but preserve the actual dimensions of understanding — the caveats, the qualifications, the uncertainty, the reasoning chain — so that the reader who relies on the tool is not left with a flat document that looks like knowledge but cannot bear its weight.</p>
</content>
  </entry>
  
  <entry>
    <title>What Is Cited Knowledge? A Manifesto for the Age of AI Synthesis</title>
    <link href="https://citedmind.com/posts/2026/04/09/what-is-cited-knowledge-a-manifesto-for-the-age-of-ai-synthesis/"/>
    <updated>2026-04-09T00:00:00Z</updated>
    <id>https://citedmind.com/posts/2026/04/09/what-is-cited-knowledge-a-manifesto-for-the-age-of-ai-synthesis/</id>
    <summary>You know the feeling. Three hours vanished. Forty-seven browser tabs fan out across the top of your screen like a hand of playing cards dealt by a dealer who lost interest halfway through. Somewhere in that sprawl of PDFs, interview transcripts, and long-form essays lies the answer you set out to find. But your eyes are tired, your notes are a tangle of contradictory half-quotations, and the most...</summary>
    <content type="html"><p>You know the feeling. Three hours vanished. Forty-seven browser tabs fan out across the top of your screen like a hand of playing cards dealt by a dealer who lost interest halfway through. Somewhere in that sprawl of PDFs, interview transcripts, and long-form essays lies the answer you set out to find. But your eyes are tired, your notes are a tangle of contradictory half-quotations, and the most honest thing you can say about your research session is that you consumed an enormous amount of content and understood almost nothing. You have been fed. You have not been nourished.</p>
<p>We have all been there on a smaller scale. What you might not have articulated is that this experience is not a personal failure of discipline. It is a structural failure of medium. The information environment we inhabit was not designed to produce comprehension. It was designed to produce consumption. And every tool that accelerates the velocity of information without preserving the chains of provenance compounds the problem rather than solving it.</p>
<p>This is a manifesto for doing it differently. For insisting that every claim carry its history. For building knowledge artifacts worth trusting.</p>
<h2 id="the-problem-with-uncited-synthesis" tabindex="-1">The Problem With Uncited Synthesis <a class="header-anchor" href="#the-problem-with-uncited-synthesis">#</a></h2>
<p>The current generation of AI synthesis tools shares a seductive promise: give us your tangled, overflowing information diet, and we will make it legible. The promise is not entirely false. Large language models can ingest a lecture transcript and produce a coherent paragraph about its contents. They can distill a forty-page policy document into five bullet points. The surface is plausible. The surface is the problem.</p>
<p>Consider what happens when an AI model summarizes a research paper and confidently states that &quot;the study found a 23% reduction in hospital readmissions.&quot; The number sounds precise. The syntax conveys authority. But the model cannot tell you which table in the appendix the number came from. It cannot link you to the exact paragraph where the authors qualify their finding with a sample-size caveat. It cannot show you the original phrasing that preceded the paraphrase. The claim has been severed from its evidence like a flower cut from its roots. It will look alive for a while. It will not grow.</p>
<p>I have watched colleagues present findings from AI summaries in meetings, delivering claims with the full confidence of someone who has done the reading. When pressed for the source material behind a specific statistic, the room goes quiet. The person fumbles through search histories, finds an article that seems related, and gestures vaguely. &quot;It was something like that.&quot; The authors' careful methodology, their hedged conclusions, the scope conditions they explicitly named — all of it compressed into confident prose that misrepresents not by fabrication but by excision of everything that made the original credible in the first place.</p>
<p>And this is the charitable scenario. Less charitable: AI models hallucinate. They invent citations to papers that do not exist. They conflate findings from different studies into composite claims no single paper ever made. They attribute arguments to the wrong authors. Each of these errors is, taken individually, a minor distortion. Taken collectively, they constitute what we might call epistemic pollution — the contamination of the shared knowledge commons with claims that appear authoritative but cannot be traced to legitimate grounding.</p>
<p>The danger is not merely inconvenience. When claims circulate without provenance, trust erodes. A reader who discovers that a cited statistic is fabricated does not merely stop trusting that one claim. The mistrust metastasizes. The next claim they encounter from any AI source carries a faint asterisk. The knowledge environment becomes a place where nothing is quite certain, where everything sounds plausible but nothing anchors to bedrock. This is not a technical bug. It is a philosophical betrayal. <strong>Synthesis without citation is fabrication dressed up as summary.</strong></p>
<h2 id="what-cited-actually-means-in-practice" tabindex="-1">What 'Cited' Actually Means In Practice <a class="header-anchor" href="#what-cited-actually-means-in-practice">#</a></h2>
<p>Let us be precise about terms, because precision is the point. A citation is not a footnote. A footnote is a pointer that says &quot;somewhere in the vicinity of this other document, relevant material exists.&quot; A citation, properly understood, is a bidirectional link between a claim and its evidentiary foundation. It says: &quot;This specific assertion is grounded in this specific passage, and you can verify this for yourself by looking here.&quot; The difference is not one of degree. It is a difference in kind.</p>
<p>In the FoldBrief model — which we reference not as a product pitch but as the only working instantiation of these principles we have found satisfactory — every claim in a Study Brief resolves to a stable segment of source material. Click a citation marker next to a claim about carbon capture economics and you are taken not to the beginning of a forty-page PDF, but to the exact paragraph where the original author stated their case, with the relevant sentence highlighted. The timestamp on a video source takes you to the precise second the speaker made their point. The page number on a text source takes you to that page, not to a table of contents three screens away.</p>
<p>This is bidirectional. You can read from the Brief forward into the source. You can also read from the source backward into the Brief. Every segment of preserved source material knows which claims in the Brief draw upon it. The relationship is not parasitic. It is symbiotic. The Brief does not consume the source the way a summary discards its original. The two coexist in a maintained relationship that persists over time.</p>
<p>The contrast with conventional approaches is stark. Most AI tools treat source material the way a coal plant treats fuel: burn it, extract the energy, discard the ash. The summary is the energy. The original is the ash. But knowledge does not work like energy extraction. A summary that cannot be checked against its source is a summary that cannot be corrected, updated, or extended. It is inert. It is, in the most literal sense, ungrounded.</p>
<p>There is an architectural metaphor that clarifies the distinction. Building on bedrock means every load-bearing element connects to a stable foundation that can be inspected, tested, and relied upon. Building on sand means every element appears to hold until the conditions shift. Cited knowledge builds on bedrock. Uncited synthesis builds on sand and then sells you the architecture blueprints as though the foundation were immaterial.</p>
<p>The epistemic commitment embedded in citation-mandatory design is this: the reader always retains the power of verification. Not the power to take the author's word. The power to check the author's work. This is not a feature toggled on or off. It is a structural commitment woven into the format of the artifact itself.</p>
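<p>The bidirectional structure described above can be made concrete with a minimal sketch. This is not FoldBrief's actual implementation — every name here (<code>Claim</code>, <code>SourceSegment</code>, <code>locator</code>) is illustrative — but it shows the essential invariant: a single <code>cite</code> operation records the link in both directions, so a claim can always resolve to its passage and a preserved passage always knows which claims draw on it.</p>

```python
# Sketch of a bidirectional claim-to-source link.
# All names are illustrative assumptions, not a real product API.
from dataclasses import dataclass, field

@dataclass
class SourceSegment:
    source_id: str   # stable identifier for the preserved source
    locator: str     # e.g. a paragraph anchor, page number, or timestamp
    text: str        # the original passage, preserved verbatim
    cited_by: list = field(default_factory=list)  # claims drawing on this segment

@dataclass
class Claim:
    text: str
    evidence: list = field(default_factory=list)  # segments grounding this claim

    def cite(self, segment: SourceSegment) -> None:
        """Record the link in both directions at once."""
        self.evidence.append(segment)
        segment.cited_by.append(self)

# Reading forward (claim -> source) and backward (source -> claim).
seg = SourceSegment("paper-042", "p. 7, para 3",
                    "Readmissions fell 23% (n = 214; see limitations).")
claim = Claim("The study found a 23% reduction in hospital readmissions.")
claim.cite(seg)

assert claim.evidence[0].locator == "p. 7, para 3"  # claim resolves to its passage
assert seg.cited_by[0] is claim                     # segment knows who cites it
```

<p>The design point is that neither direction is derived or reconstructed after the fact: verification is cheap in both directions because the relationship is stored, not inferred.</p>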
<h2 id="why-artifacts-beat-chats-for-comprehension" tabindex="-1">Why Artifacts Beat Chats For Comprehension <a class="header-anchor" href="#why-artifacts-beat-chats-for-comprehension">#</a></h2>
<p>There is a cognitive science dimension to this that the chat paradigm obscures. When you read a structured document — an essay, a report, a well-organized briefing — your brain does something it cannot do during a conversational exchange. It builds a spatial model. You remember that the key qualification appeared about two-thirds of the way through. You recall that the methodology section came before the results, and that the author's main argument hinged on a specific italicized phrase. Your hippocampus and visual cortex collaborate to assign positions to ideas, and those positions become part of how you retrieve the knowledge later.</p>
<p>Research on reading comprehension consistently demonstrates this. Linear, structured text activates different neural pathways than the branching, discontinuous sequence of conversational Q&amp;A. A 2019 study in <em>Cognitive Science</em> found that readers who studied material in a continuous document format showed 34% better retention of relational information compared to readers who accessed the same content through an interactive question-answer interface. The difference was not in the content. It was in the vehicle of delivery.</p>
<p>Spatial memory is real memory. When a colleague asks you about the argument you read last week, you do not recall the sequential exchange of queries and responses. You recall where something was on the page, what surrounded it, what heading it fell under. This is why printed study guides survived the transition to digital education long after textbooks migrated to screens. The artifact persists in your mind as an object, not a conversation.</p>
<p>Re-readability compounds this advantage. You can scan a document. You can skim headings and find the section you need in seconds. You can fold a corner of a page, underline a passage, and return to exactly where you were. You cannot effectively scan a chat transcript. Even with powerful search, a chat log is a flat chronological stream that forces linear traversal. The medium actively fights non-linear access.</p>
<p>The Study Brief — the format FoldBrief produces — occupies a category of its own. It is not a summary, because summaries discard source material. It is not a transcript, because transcripts lack structure and synthesis. It is not a chatlog, because chatlogs are ephemeral and non-spatial. It is an artifact: a carefully structured document that preserves the relationship between synthesis and source, between claim and evidence, between the reader's current understanding and the deeper material they might need.</p>
<p>The design choices of this format are not aesthetic. They are cognitive. Serif typography is easier to read at length than sans-serif — the lateral serifs create a visual &quot;rail&quot; that guides the eye along the line, reducing cognitive effort per line by measurable amounts. Calm, spacious layouts reduce what researchers call extraneous cognitive load — the mental bandwidth consumed by processing the interface rather than the content. Anti-anxiety design is not a marketing phrase. It is a recognition that information overload produces genuine physiological stress responses, and that reducing visual noise in the reading environment produces measurably better comprehension.</p>
<h2 id="the-epistemological-stake" tabindex="-1">The Epistemological Stake <a class="header-anchor" href="#the-epistemological-stake">#</a></h2>
<p>Zoom out. What we are discussing is not primarily a matter of productivity tools. It is a matter of — there is no way to say this without sounding grand, and the grandness is warranted — how we know things in the twenty-first century.</p>
<p>Every era has its epistemic crisis. The printing press created one: the sudden availability of texts in vernacular languages meant that lay readers could encounter arguments previously gatekept by clerical scholars. The internet created another: the democratization of publishing meant that authoritative-sounding claims could circulate without any institutional vetting. We are now in the third such crisis, and it is the most dangerous because it is the most subtle.</p>
<p>AI synthesis tools do not merely publish claims. They produce claims with a form of synthetic authority — the grammar of knowledge without the provenance of knowledge. A well-written AI summary reads like a literature review written by someone who has read everything. But the model has not read in any meaningful sense. It has processed statistical patterns. When those patterns align with truth, the output is helpful. When they diverge — and they diverge more often than any user of these tools would like to believe — the output is confident misinformation indistinguishable from faithful summary without external verification.</p>
<p>If we accept uncited synthesis as the default mode of knowledge transfer, we accept the gradual degradation of shared reality. Not with a bang, but with a quiet erosion. Each unverified claim that enters circulation makes the next verification slightly harder, because the space of claims-to-verify expands faster than the space of verified claims. The epistemic commons suffers the same tragedy as any commons: overuse without maintenance.</p>
<p>Cited knowledge is a discipline, not a feature toggle. It requires intentionality at every stage of the knowledge production pipeline. Sources must be preserved. Claims must be linked. Formats must be designed to make verification easy rather than making it unnecessary. The discipline of citation is, at its root, the discipline of intellectual honesty — the willingness to let your reader follow your trail, even when it might lead somewhere you did not intend.</p>
<p>We can demand better. Not as consumers switching products, but as participants in a knowledge ecosystem choosing what standards we will uphold and what we will refuse to tolerate.</p>
<h2 id="an-invitation" tabindex="-1">An Invitation <a class="header-anchor" href="#an-invitation">#</a></h2>
<p>This publication, CitedMind, exists to explore these ideas in depth and in public. Not because we have all the answers, but because the questions deserve sustained, careful attention that a chatbot iteration cycle cannot provide. The philosophy of knowledge — epistemology applied to the age of synthesis — is too important to be left to the same tools that created the crisis.</p>
<p>We will examine the ethics of attribution in systems designed to obscure it. We will investigate the cognitive science of comprehension and what it teaches us about format design. We will critique the tools, ourselves included, when they fall short of the standards we argue for. And we will celebrate the work of people — from Vannevar Bush's memex to Ted Nelson's hypertext to the unnamed indexers who spent careers ensuring that knowledge could be found — who understood before us that the architecture of access is the architecture of thought.</p>
<p>Two sister publications will join CitedMind in the months ahead. ArtifactCraft will focus on methodology and craft — the how of building knowledge artifacts that honor their sources. TheFoldedReader will address the lifestyle and community dimensions — how people live with abundant information rather than drowning in it. Together, these three form The Fold Ecosystem, and we think the name is apt. Folding, in the sense we mean it, is not compression that discards. It is careful arrangement that preserves.</p>
<p>If you have read this far, you already understand something that no chatbot summary could have transmitted: the cadence of an argument built claim by claim, each one resting on the one before, each one open to your inspection. That cadence is itself a form of cited knowledge. You are experiencing it.</p>
<p>Welcome to CitedMind. There is much to think about together.</p>
</content>
  </entry>
  
</feed>