{"id":"2c357ca6-cf6e-4240-ae39-46b78d07c2ce","title":"Rewriting AI Article: Critics and Proponents\n","created_at":"2025-03-06T10:24:40Z","saved":true,"shared":true,"branch_id":"00000000-0000-4000-0000-000000000000","profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"tag_ids":[],"read_only":true}
[{"id":"8044aa6e-c270-42dd-9fef-a360098ddbac","created_at":"2025-03-06T10:24:40Z","state":"done","prompt":"I wrote a draft of an article on why I believe AI critics should stop calling AI \"useless\". Here's the current version:\n\nTitle: Stop Calling GenAI \"Useless\"\nSubtitle: There are many valid criticisms of GenAI / LLM technology. \"They are useless\" is not one of them, and burns bridges.\n\nA common shorthand I am seeing in criticisms of Generative AI technologies is that GenAI / LLMs are “useless”. This shows up again and again, especially in hot takes on Mastodon, and sometimes from people whose professional work I deeply respect. While I understand that, to some, it might be a shorthand for a more thought-out criticism, this argument still annoys me, and I think it hurts the credibility of people who fundamentally want to do something good: ask if a disruptive new technology that has big social, legal, and also environmental impact is actually “good”, “worth it”, or whatever else you want to call it. This is my case for why we should stop using this argument, and talk about the (very real!) issues with GenAI differently.\n\nIn this article, I am going to use the shorthand “AI” as a stand-in for Generative AI / LLMs. I am aware of both the discussions about whether these are “real” AI, and of the fact that (Gen)AI is more than LLMs. Let’s shake hands on using this definition for the purpose of this article, knowing its limitations, and not argue about semantics in the comments, alright?\n\n## The Usefulness of AI\n\nWhat makes AI useless or useful? (What makes anything useless or useful?)\n\nA very simple definition could be: a technology is useful if it solves a problem for its user. That problem may be small (“please proofread this essay for me”) or life-changing (“ChatGPT diagnosed my rare disease” is basically a whole genre now, even though the effectiveness is still hit or miss). 
The goal of the user might be ethical or unethical, prosocial or antisocial, or somewhere in between on the spectrum - what matters is that someone wants the problem solved, and AI is helping that person solve it. If it is successful, in the eyes of the user, then it is useful to them.\n\nAI is useful to me, specifically. I find myself using it to aid in restructuring or rewriting my texts, not by telling it “rewrite this for me”, but by asking “please review this text and highlight points where the arguments are unclear, the storyline doesn’t make sense, or the general style could be strengthened”. And the results are good! Just like when using a human reviewer, you use 20% of what they tell you directly, adapt 60%, and ignore the remaining 20% - except the review cycles are measured in seconds, not days. This is useful.\n\nI also use it to learn about broad concepts, have it explain specific things to me in terms that I can understand more easily, write short bash scripts that are ugly but only need to work once, and many other things. And I occasionally use services like Perplexity to have them summarize the state of research on specific, niche topics that I am not an expert in. Does it get things wrong or make them up? Occasionally, yes, but that is not the point. The point is that getting an 80% correct explanation of the current state of research on something that I know very little about is more useful to me than spending three hours attempting to decipher the scientific papers this explanation was pulled from, and I can then go out and verify the tidbits that are really important in my situation and that I absolutely need to get right.\n\nJust today, I assisted a cyclist who had been hit by a delivery bike and fallen on her unprotected head (seriously, people, wear helmets!). I used an AI to quickly pull up a list of symptoms for concussion to check with her. 
This gave me the information in half the time it would have taken me to scroll past the ads on Google, click on the first link, wait for it to load, dismiss the cookie banner and popup ad, and get a less well formatted and potentially less complete version of this information (that, let’s be real, may have also been written by AI). This is useful.\n\n## Why Is This Hurting Your Argument?\n\nLet’s assume that you actually want to convince people with your argument against AI, and aren’t just in it for the memes and hot takes. We have established that to the people you want to convince and bring to your side, AI is useful. You may not think that the use that they are getting from it is good (one person’s “effective content marketing machine” is another’s “AI slop generator poisoning the Internet”), but if you want to convince them, you need to meet them where they are and start with a shared reality / a shared set of assumptions to base a productive discussion on. By saying that AI is useless, you are signaling that this shared reality does not exist on a foundational level, and are hurting the credibility of your other arguments.\n\nYou’ve probably been on the other side of this kind of argument yourself. If you are an AI sceptic, chances are you have a high affinity for technology and are probably hanging out with people from hackerspaces, or gamers, or other technically-inclined subcultures. (There are other groups of people opposed to AI, like artists, but let me use these as an example, as this is where I have my own roots). Do you remember when people were saying things like this?\n\n“Why would anyone need more than ISDN speeds at home?”\n\n“Oh yes, we thought about digitizing these forms, but making you send them by fax / letter is a lot easier.”\n\n“What the hell do you need mobile internet for?”\n\n“Well, this whole ‘Internet’ thing is never going to catch on, why are you wasting your time with it? 
Why not learn mainframe programming?” (okay, that probably hasn’t been said in a few years, but you get the idea)\n\nDid sentences like this make you trust the judgement of the person delivering them? Would you have been as open as before to receiving potentially more valid arguments from them after that, or would you have disqualified them as cranks who didn’t know what they were talking about? I know I always had a hard time taking people seriously after sentences like that.\n\nPeople on all levels, from school children to CEOs, have had the experience that AI can help them solve their problems. Some people see AI as a technological revolution that is “on the small end, at least as large as the Internet”. These are fundamentally serious people who have to convince other fundamentally serious people that spending hundreds of billions of dollars on this bet is the right call, and they are succeeding with these arguments. Telling them that AI is useless is so far outside of their perceived reality that they will immediately stop listening to anything you are saying and ignore you as a crank, Luddite, or whatever other term they choose to use.\n\nNow, is it possible that AI is a bubble that will pop? In my eyes, it is not only possible but inevitable. Many new technologies have a bubble phase. The Dot-Com bubble also had a telco bubble attached to it that popped hard after significant initial overinvestment in the buildout of connectivity, leading to massive losses for the affected telcos. AI evangelists seem to be split on whether the AI bubble will pop or whether the demand for “artificial cognition” (their words) will increase so much that the currently planned buildout will be insufficient, but let’s be clear: if it pops, they will be broadly OK with that and count it as the cost of doing business and rolling out a disruptive new technology. They are writing books extolling the power of bubble dynamics in driving change (and making a few good arguments!). 
And, let’s be honest: if you could set a couple billion dollars of private equity funds on fire to roll out a technology that significantly changes the world for the better a couple years earlier, wouldn’t you?\n\n## Distinguishing Between Poor Implementations and Core Utility\n\nWe've all seen the awkwardly integrated AI assistants that companies have bolted onto existing products to please shareholders. Similarly, the flood of “X, but with AI” startups often deliver little value beyond buzzword compliance and setting venture capital on fire. These implementations can indeed be useless or even counterproductive, and it is only correct to call this out.\n\nThis pattern isn't unique to AI. We saw similar dynamics with blockchain (a hype cycle where I am far more open to the argument of “fundamentally useless except for crime and destroying the planet”, but I digress), IoT, and countless other technology waves. Poor implementations and hype-driven products deserve criticism, and pointing them out is both valid and necessary. And, yes, it can be funny and even cathartic to point and laugh when they implode.\n\nHowever, the failure of these specific implementations doesn't invalidate the core utility of the underlying technology. Just as the Dot-Com bubble's burst didn't mean e-commerce was fundamentally a bad idea, the inevitable collapse of many AI ventures won't mean the technology itself lacks utility.\n\n## The Right Criticisms Of AI?\n\nSo, what should we argue about, then? In my view, there’s no lack of easy targets or hard questions to be had here. The elephant in the room is the environmental impact of planning to spend double-digit percentage points of the national power grid capacity on AI computations. 
The social impact of replacing (skilled or unskilled) labor with computers, the cost to companies that adopt it too early and make expensive mistakes because AI is not suited to their purposes, the impact on IT security and maintainability of having AI write your code for you, the impact on education when people can just generate a complete essay, … there are lots and lots of criticisms to choose from. Pick one. Hell, pick all of them. Go to town.\n\nOr go deeper on where your feeling of “AI is useless” comes from. Is it that you expect it to over-promise and under-deliver? Is it the ecosystem of grifters looking for easy money around it? Is it the fact that the system will just make things up if it runs out of ideas? Or have you tried that specific feature someone else is touting and been disappointed? Why?\n\nThese are problems that we need to address, and many of these problems are also seen as problems by AI proponents, and are being actively worked on - because these people aren’t dumb or evil. They see the technology through other eyes, they weigh the importance of different factors differently, or they have different expectations of the effects of future models on all of these issues. This isn’t a cabal of grifters out to steal money from Hard-Working Americans™ - many of them are people who believe in what they are doing.\n\nPersonally, I find reading the articles and listening to the podcasts of people who aren’t AI evangelists but are in the pro-AI camp quite helpful to update my mental models of what the “other side” is thinking. Stuff like the Complex Systems podcast with Patrick McKenzie on boom and bust cycles, or on AI and power economics (NB: I don’t consider this an overall fantastic podcast, but find the episodes interesting for learning what a proud capitalist with some libertarian leanings thinks about these issues, even if I don’t share these views). 
Similarly, reading what the founder of an AI lab sees as the potential upside of AI can be instructive, even if you don’t agree with them. I’m sure there are more good sources that I can’t remember off the top of my head (feel free to put them in the comments below or send them to me on Mastodon and I will add them here).\n\nHaving these discussions is important. However, having an agreed common reality is a prerequisite for that. Calling AI categorically useless sabotages this common reality, which will help no one. If we want to be heard, we have to meet people half-way, and acknowledging that AI can be a tool to solve some problems, while sometimes disagreeing on whether these problems should be solved or are solved well, can be a more fruitful base for discussions. Just because something is useful does not mean that you endorse it. It just means that you have to work a little harder and dig a little deeper in your criticism - and that will make it a better discussion for everyone.\n\n--- \nI am thinking about a rewrite where I am going a bit broader on the argument, and instead say that AI critics should at least try to understand what AI proponents are seeing in the technology. Do you see a way of doing this without scrapping more or less the entire draft and writing a completely new article?","reply":"<h1>Feedback on Your Draft: Expanding Without Starting Over</h1>\n<p>Your article makes a compelling case against dismissing AI as "useless" while acknowledging legitimate concerns. I think you can broaden your argument to include understanding AI proponents' perspectives without scrapping your draft. Here's how you might approach this:</p>\n<h2>Suggested Modifications</h2>\n<h3>1. 
Expand your introduction</h3>\n<p>Add a paragraph after your introduction that explicitly states your expanded thesis:</p>\n<p>"Beyond avoiding dismissive arguments like 'AI is useless,' I believe critics would benefit from genuinely trying to understand what AI proponents see in this technology. This doesn't require abandoning critical perspectives, but rather developing a more nuanced understanding of both the technology's capabilities and the motivations of those championing it."</p>\n<h3>2. Add a new section after "The Usefulness of AI"</h3>\n<div class=\"codehilite\"><span class=\"filename\">Text Only</span><pre><span></span><code>## Understanding the AI Optimist's Perspective\n\nWhile I've outlined how AI is useful to me personally, it's worth exploring why AI proponents are so enthusiastic about this technology. Many see potential far beyond the current applications:\n\n- **Productivity amplification**: Proponents see AI as a tool that can dramatically increase human productivity across domains, from coding to creative work.\n\n- **Democratization of capabilities**: They envision AI making specialized skills more accessible to people without formal training.\n\n- **Problem-solving at scale**: Many believe AI can help address complex challenges like climate modeling, drug discovery, and scientific research.\n\n- **Economic transformation**: Some see AI as enabling new business models and economic opportunities that weren't previously possible.\n\nUnderstanding these perspectives doesn't require agreeing with them. However, engaging with the strongest versions of pro-AI arguments rather than caricatures allows for more productive dialogue about the technology's future.\n</code></pre></div>\n<h3>3. 
Modify your "Right Criticisms" section</h3>\n<p>Add a paragraph about how understanding proponents' views helps formulate better criticisms:</p>\n<p>"When we understand what AI proponents actually value and believe about the technology, we can craft criticisms that address their actual claims rather than strawman arguments. This makes our critiques more persuasive and harder to dismiss. For instance, if proponents value AI for its productivity benefits, we can ask pointed questions about whether those benefits are distributed equitably or whether they come at the cost of job displacement."</p>\n<h3>4. Add a new concluding paragraph</h3>\n<p>End with a stronger call for mutual understanding:</p>\n<p>"The debate around AI isn't simply between 'pro' and 'anti' camps. It's a complex conversation about how we want to integrate powerful new tools into our society. By acknowledging AI's utility while critically examining its impacts, and by genuinely trying to understand the perspectives of those who see tremendous promise in this technology, we can have the nuanced discussions needed to shape AI's development in beneficial directions."</p>\n<h2>Why This Approach Works</h2>\n<p>This approach preserves your core argument about not dismissing AI as "useless" while expanding to include the importance of understanding proponents' perspectives. 
It maintains your existing structure and examples while adding new dimensions to your analysis.</p>\n<p>The additions focus on:</p>\n<ol>\n<li>Explicitly stating your expanded thesis</li>\n<li>Exploring proponents' perspectives</li>\n<li>Showing how understanding these views strengthens criticism</li>\n<li>Concluding with a call for nuanced dialogue</li>\n</ol>\n<p>This approach avoids starting from scratch while significantly broadening your argument in the direction you indicated.</p>","md":"# Feedback on Your Draft: Expanding Without Starting Over\n\nYour article makes a compelling case against dismissing AI as \"useless\" while acknowledging legitimate concerns. I think you can broaden your argument to include understanding AI proponents' perspectives without scrapping your draft. Here's how you might approach this:\n\n## Suggested Modifications\n\n### 1. Expand your introduction\n\nAdd a paragraph after your introduction that explicitly states your expanded thesis:\n\n\"Beyond avoiding dismissive arguments like 'AI is useless,' I believe critics would benefit from genuinely trying to understand what AI proponents see in this technology. This doesn't require abandoning critical perspectives, but rather developing a more nuanced understanding of both the technology's capabilities and the motivations of those championing it.\"\n\n### 2. Add a new section after \"The Usefulness of AI\"\n\n```\n## Understanding the AI Optimist's Perspective\n\nWhile I've outlined how AI is useful to me personally, it's worth exploring why AI proponents are so enthusiastic about this technology. 
Many see potential far beyond the current applications:\n\n- **Productivity amplification**: Proponents see AI as a tool that can dramatically increase human productivity across domains, from coding to creative work.\n\n- **Democratization of capabilities**: They envision AI making specialized skills more accessible to people without formal training.\n\n- **Problem-solving at scale**: Many believe AI can help address complex challenges like climate modeling, drug discovery, and scientific research.\n\n- **Economic transformation**: Some see AI as enabling new business models and economic opportunities that weren't previously possible.\n\nUnderstanding these perspectives doesn't require agreeing with them. However, engaging with the strongest versions of pro-AI arguments rather than caricatures allows for more productive dialogue about the technology's future.\n```\n\n### 3. Modify your \"Right Criticisms\" section\n\nAdd a paragraph about how understanding proponents' views helps formulate better criticisms:\n\n\"When we understand what AI proponents actually value and believe about the technology, we can craft criticisms that address their actual claims rather than strawman arguments. This makes our critiques more persuasive and harder to dismiss. For instance, if proponents value AI for its productivity benefits, we can ask pointed questions about whether those benefits are distributed equitably or whether they come at the cost of job displacement.\"\n\n### 4. Add a new concluding paragraph\n\nEnd with a stronger call for mutual understanding:\n\n\"The debate around AI isn't simply between 'pro' and 'anti' camps. It's a complex conversation about how we want to integrate powerful new tools into our society. 
By acknowledging AI's utility while critically examining its impacts, and by genuinely trying to understand the perspectives of those who see tremendous promise in this technology, we can have the nuanced discussions needed to shape AI's development in beneficial directions.\"\n\n## Why This Approach Works\n\nThis approach preserves your core argument about not dismissing AI as \"useless\" while expanding to include the importance of understanding proponents' perspectives. It maintains your existing structure and examples while adding new dimensions to your analysis.\n\nThe additions focus on:\n1. Explicitly stating your expanded thesis\n2. Exploring proponents' perspectives\n3. Showing how understanding these views strengthens criticism\n4. Concluding with a call for nuanced dialogue\n\nThis approach avoids starting from scratch while significantly broadening your argument in the direction you indicated.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"47","tokens":"4,651","cost":"0.03","total_time_second":15.62},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]}]