{"id":"3d26130f-a7fe-4f63-b4c5-8119159692d7","title":"Article Feedback: AI Criticism and Scepticism","created_at":"2025-03-05T19:50:55Z","saved":true,"shared":true,"branch_id":"00000000-0000-4000-0000-000000000000","profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"tag_ids":[],"read_only":true}
[{"id":"6b8bd9fa-7a96-475d-9c29-88ea7ba95c7d","created_at":"2025-03-05T19:50:55Z","state":"done","prompt":"I am writing an article on why people should stop calling AI / LLMs useless when criticizing them. I would like your feedback on the flow of the argument, the storyline, the style, and any other things you deem relevant. My target audience are AI sceptics, who are assumed to be tech-literate but not intimitely familiar with the positions of AI proponents.\n\nThe article follows below. Formatting and hyperlinks have not made the transition to this editor, sorry.\n\n# Stop Calling AI \"Useless\"\nSubtitle: There are many valid criticisms of AI / LLM technology. \"They are useless\" is not one of them, and burns credibility.\n\nA common shorthand I am seeing in criticisms of AI technologies is that AI / LLMs are “useless”. This shows up again and again, especially in hot takes on Mastodon, and sometimes from people whose professional work I deeply respect. This argument annoys me, and I think it hurts the credibility of people who fundamentally want to do something good: ask if a disruptive new technology that has big social, legal, and also environmental impact is actually “good”, “worth it”, or whatever else you want to call it. This is my case for why we should stop using this argument and talk about the (very real!) issues with AI differently.\n\nSide note: I am going to use AI mostly as a stand-in for LLMs in this article. I am aware of both the discussions whether LLMs are “real” AI, and of the fact that AI is more than LLMs. Let’s shake hands on using this definition for the purpose of this article, knowing its limitations, and not argue about semantics in the comments, alright?\n\n## The Usefulness of AI\n\nWhat makes AI useless or useful? (What makes anything useless or useful?)\n\nA very simple definition could be: a technology is useful if it solves a problem for its user. That problem may be small (“please proofread this essay for me”) or life-changing (“ChatGPT diagnosed my rare disease” is basically a whole genre now, even though the effectiveness is still hit or miss). The goal of the user might be ethical or unethical, prosocial or antisocial, or somewhere in between on the spectrum - what matters is that someone wants the problem solved, and AI is helping that person solve it. If it is successful, in the eyes of the user, then it is useful to them.\n\nAI is useful to people. That is a fact. You may not like this fact, or its implications, but it is true.\n\nAI is useful to me, specifically. I find myself using it to aid in restructuring or rewriting my texts, not by telling it “rewrite this for me”, but by asking “please review this text and highlight points where the arguments are unclear, the storyline doesn’t make sense, or the general style could be strengthened\". And the results are good! Just like when using a human reviewer, you use 20% of what they tell you directly, adapt 60%, and ignore the remaining 20%, and the review cycles are measured in seconds, not days. This is useful.\n\nI also use it to learn about broad concepts, have it explain specific things to me in terms that I can understand more easily, write short bash scripts that are ugly but only need to work once, and many other things. And I occasionally use services like Perplexity to have them summarize the state of research on specific, niche topics that I am not an expert in. Does it get things wrong? Yes, but that is not the point. 
The point is that getting an 80% correct explanation of the current state of research on something that I know very little about is more useful to me than spending three hours attempting to decipher the scientific papers this explanation was pulled from, and I can then go out and verify the tidbits that are really important in my situation and that absolutely need to be correct.\n\nJust today, I assisted a cyclist who had been hit by a delivery bike and fallen on her unprotected head (seriously, people, wear helmets!). I used an AI to quickly pull up a list of concussion symptoms to check with her. This gave me the information in half the time it would have taken me to scroll past the ads on Google, click on the first link, wait for it to load, dismiss the cookie banner and popup ad, and get a less well formatted and potentially less complete version of this information (that, let’s be real, may have also been written by AI). This is useful.\n\n## Why Is This Hurting Your Argument?\n\nLet’s assume that you actually want to convince people with your argument against AI, and aren’t just in it for the memes and hot takes. We have established that to the people you want to convince and bring to your side, AI is useful. You may not think that the use that they are getting from it is good (one person’s “effective content marketing machine” is another’s “AI slop generator poisoning the Internet”), but if you want to convince them, you need to meet them where they are and start with a shared reality / a shared set of assumptions to base a productive argument on. By saying that AI is useless, you are signaling that this shared reality does not exist on a foundational level, and are hurting the credibility of your other arguments.\n\nYou’ve probably been on the other side of this kind of argument yourself. If you are an AI sceptic, chances are you have a high affinity for technology and are probably hanging out with people from hackerspaces, or gamers, or other technically-inclined subcultures. (There are other groups of people opposed to AI, like artists, but let me use these as an example, as this is where I have my own roots). Do you remember when people were saying things like these?\n\n“Why would anyone need more than ISDN speeds at home?”\n\n“Oh yes, we thought about digitizing these forms, but making you send them by fax / letter is a lot easier.”\n\n“What the hell do you need mobile internet for?”\n\n“Well, this whole ‘Internet’ thing is never going to catch on, why are you wasting your time with it? Why not learn mainframe programming?” (okay, that probably hasn’t been said in a few years, but you get the idea)\n\nDid sentences like these make you trust the judgement of the person delivering them? Would you have been as open as before to receiving potentially more valid arguments from them after that, or would you have disqualified them as cranks who didn’t know what they were talking about? I know I always had a hard time taking people seriously after sentences like that.\n\nPeople on all levels, from school children to CEOs, have found that AI can help them solve their problems. Some people see AI as a technological revolution that is “on the small end, at least as large as the Internet”. These are fundamentally serious people who have to convince other fundamentally serious people that spending hundreds of billions of dollars on this bet is the right call, and they are succeeding with these arguments. 
Telling them that AI is useless is so far outside of their perceived reality that they will immediately stop listening to anything you are saying and ignore you as a crank, Luddite, or whatever other term they choose to use.\n\nNow, is it possible that AI is a bubble that will pop? In my eyes, it is not only possible but inevitable. Many new technologies have a bubble phase. The dot-com bubble also had a telco bubble attached to it that popped hard after significant initial overinvestment in the buildout of connectivity, leading to massive losses for the affected telcos. AI evangelists seem to be split on whether the AI bubble will pop or whether the demand for “artificial cognition” (their words) will increase so much that the current buildout will be insufficient, but let’s be clear: if it pops, they will be broadly OK with that and count it as the cost of doing business and rolling out a disruptive new technology. They are writing books extolling the power of bubble dynamics in driving change (and making a few good arguments!). And, let’s be honest: if you could set a couple billion dollars of private equity funds on fire to roll out a technology that changes the world for the better a couple years earlier, wouldn’t you?\n\n## But This Specific Use of AI is Useless!\n\nNow, I can already see you pointing at the AI features being forced down your throats by corporations that thought they needed to ship “something with AI” to please their shareholders, or at the dumb startups of the pattern “X, but with AI” that will inevitably explode once the easy money runs out. We’ve seen the same with the blockchain startups (a hype cycle where I am far more open to the argument of “fundamentally useless except for crime and destroying the planet”, but I digress). New hype cycles bring new companies that are just cashing in on the hype and will rise and fall on mostly-predictable patterns. Yes, it’s fun to laugh at them when they crash and burn; no, the VCs will probably never learn; news at eleven.\n\nBut let me ask you: did the bust of the dot-com bubble mean that eCommerce and the Internet were useless?\n\nI don’t think that anyone would argue that this is the case. Similarly, I believe that, in terms of pure usefulness and leaving all other factors aside, AI / LLMs have a useful core functionality that is genuinely a step up compared to what we had before, and that can be productively used.\n\n## The Right Criticisms Of AI?\n\nSo, what should we argue about, then? In my view, there’s no lack of easy targets or hard questions to be had here. The elephant in the room is the environmental impact of planning to spend double-digit percentage points of the national power grid capacity on AI computations. The social impact of replacing (skilled or unskilled) labor with computers, the impact on the effectiveness of companies doing this too early and making expensive mistakes due to AI not being suited for these purposes, the impact on IT Security and maintainability of having AI write your code for you, the impact on education when people can just generate a complete essay, … there are lots and lots of criticisms to choose from. Pick one. Hell, pick all of them. Go to town.\n\nThese are problems that we need to address, and many of these problems are also seen as problems by AI proponents, and are being actively worked on - because these people aren’t dumb or evil. 
They see the technology through other eyes, they weigh the importance of different factors differently, or they have different expectations of the effects of future models on all of these issues. This isn’t a cabal of grifters out to steal money from Hard-Working Americans™, but many of them are people who believe in what they are doing.\n\nPersonally, I find reading the articles and listening to the podcasts of people who aren’t AI evangelists but are in the pro-AI camp quite helpful to update my mental models of what the “other side” is thinking. Stuff like the Complex Systems podcast with Patrick McKenzie on boom and bust cycles, or on AI and power economics (NB: I don’t consider this an overall fantastic podcast, but find the episodes interesting for learning what a proud capitalist with some libertarian leanings thinks about these issues, even if I don’t share these views). Similarly, reading what the founder of an AI lab sees as the potential upside of AI can be instructive, even if you don’t agree with them. I’m sure there are more good sources that I can’t remember off the top of my head (feel free to put them in the comments below or send them to me on Mastodon and I will add them here).\n\nHaving these discussions is important. However, having an agreed common reality is a prerequisite for that. Calling AI categorically useless sabotages this common reality, which will help no one. If we want to be heard, we have to meet people half-way, and acknowledging that AI can be a tool to solve some problems, while sometimes disagreeing on whether these problems should be solved or are solved well, can be a more fruitful base for discussions.","reply":"<h1>Feedback on "Stop Calling AI 'Useless'" Article</h1>\n<p>Your article presents a well-structured and thoughtful argument against dismissing AI/LLMs as "useless." 
Here's my feedback on various aspects:</p>\n<h2>Strengths</h2>\n<p><strong>Flow and Structure</strong></p>\n<ul>\n<li>The article has a clear, logical progression from establishing your position to explaining why the "useless" argument is counterproductive.</li>\n<li>Your introduction effectively sets up the problem and your stance without being overly confrontational.</li>\n<li>The use of personal examples in the "Usefulness of AI" section provides concrete evidence that strengthens your argument.</li>\n</ul>\n<p><strong>Tone and Style</strong></p>\n<ul>\n<li>Your conversational tone works well for engaging skeptical readers without alienating them.</li>\n<li>The balance between personal anecdotes and broader arguments creates an accessible yet substantive piece.</li>\n<li>Your acknowledgment of valid criticisms demonstrates intellectual honesty that will help maintain credibility with your target audience.</li>\n</ul>\n<p><strong>Persuasive Elements</strong></p>\n<ul>\n<li>The comparison to historical tech skepticism ("Why would anyone need more than ISDN speeds?") is particularly effective - it helps readers recognize potential parallels to their own position.</li>\n<li>Your framing of "shared reality" as a prerequisite for productive discussion is compelling and well-articulated.</li>\n<li>The distinction between criticizing specific implementations versus dismissing the entire technology is an important nuance.</li>\n</ul>\n<h2>Suggestions for Improvement</h2>\n<p><strong>Strengthen the Counter-Arguments</strong></p>\n<ul>\n<li>Consider addressing more directly why some people perceive AI as "useless" - is it because of overhyped capabilities, ethical concerns reframed as utility concerns, or genuine disappointment with current limitations?</li>\n<li>You might acknowledge the gap between marketing promises and actual capabilities that fuels some of the "useless" rhetoric.</li>\n</ul>\n<p><strong>Expand on Alternatives</strong></p>\n<ul>\n<li>Your "Right Criticisms" section could benefit from more development - perhaps offer 1-2 examples of how to reframe a "useless" criticism into a more productive one.</li>\n<li>Consider adding a brief discussion of how to evaluate AI's usefulness against its costs (environmental, social, etc.) in a more nuanced way.</li>\n</ul>\n<p><strong>Audience Engagement</strong></p>\n<ul>\n<li>Since your target audience is tech-literate skeptics, you might include more acknowledgment of legitimate technical limitations of current AI systems while maintaining your overall argument.</li>\n<li>Consider addressing the concern that acknowledging AI's usefulness might be seen as endorsing its widespread adoption despite unresolved issues.</li>\n</ul>\n<p><strong>Stylistic Considerations</strong></p>\n<ul>\n<li>The side note about terminology could be shortened or moved to a footnote to maintain momentum in your introduction.</li>\n<li>The "But This Specific Use of AI is Useless!" section feels slightly defensive - consider reframing to maintain your collaborative tone.</li>\n</ul>\n<h2>Overall Assessment</h2>\n<p>Your article effectively makes the case that dismissing AI as "useless" undermines more legitimate criticisms. The personal examples and historical parallels create a persuasive narrative that should resonate with your target audience.</p>\n<p>The strongest aspect is your framing of the issue as one of establishing common ground for productive discussion rather than simply defending AI. 
This approach is more likely to engage skeptics who might otherwise dismiss a pro-AI piece.</p>\n<p>To strengthen the piece further, consider expanding slightly on the legitimate concerns and limitations of current AI systems while maintaining your core argument about the counterproductive nature of the "useless" criticism.</p>","md":"# Feedback on \"Stop Calling AI 'Useless'\" Article\n\nYour article presents a well-structured and thoughtful argument against dismissing AI/LLMs as \"useless.\" Here's my feedback on various aspects:\n\n## Strengths\n\n**Flow and Structure**\n- The article has a clear, logical progression from establishing your position to explaining why the \"useless\" argument is counterproductive.\n- Your introduction effectively sets up the problem and your stance without being overly confrontational.\n- The use of personal examples in the \"Usefulness of AI\" section provides concrete evidence that strengthens your argument.\n\n**Tone and Style**\n- Your conversational tone works well for engaging skeptical readers without alienating them.\n- The balance between personal anecdotes and broader arguments creates an accessible yet substantive piece.\n- Your acknowledgment of valid criticisms demonstrates intellectual honesty that will help maintain credibility with your target audience.\n\n**Persuasive Elements**\n- The comparison to historical tech skepticism (\"Why would anyone need more than ISDN speeds?\") is particularly effective - it helps readers recognize potential parallels to their own position.\n- Your framing of \"shared reality\" as a prerequisite for productive discussion is compelling and well-articulated.\n- The distinction between criticizing specific implementations versus dismissing the entire technology is an important nuance.\n\n## Suggestions for Improvement\n\n**Strengthen the Counter-Arguments**\n- Consider addressing more directly why some people perceive AI as \"useless\" - is it because of overhyped capabilities, ethical concerns reframed as utility concerns, or genuine disappointment with current limitations?\n- You might acknowledge the gap between marketing promises and actual capabilities that fuels some of the \"useless\" rhetoric.\n\n**Expand on Alternatives**\n- Your \"Right Criticisms\" section could benefit from more development - perhaps offer 1-2 examples of how to reframe a \"useless\" criticism into a more productive one.\n- Consider adding a brief discussion of how to evaluate AI's usefulness against its costs (environmental, social, etc.) in a more nuanced way.\n\n**Audience Engagement**\n- Since your target audience is tech-literate skeptics, you might include more acknowledgment of legitimate technical limitations of current AI systems while maintaining your overall argument.\n- Consider addressing the concern that acknowledging AI's usefulness might be seen as endorsing its widespread adoption despite unresolved issues.\n\n**Stylistic Considerations**\n- The side note about terminology could be shortened or moved to a footnote to maintain momentum in your introduction.\n- The \"But This Specific Use of AI is Useless!\" section feels slightly defensive - consider reframing to maintain your collaborative tone.\n\n## Overall Assessment\n\nYour article effectively makes the case that dismissing AI as \"useless\" undermines more legitimate criticisms. 
The personal examples and historical parallels create a persuasive narrative that should resonate with your target audience.\n\nThe strongest aspect is your framing of the issue as one of establishing common ground for productive discussion rather than simply defending AI. This approach is more likely to engage skeptics who might otherwise dismiss a pro-AI piece.\n\nTo strengthen the piece further, consider expanding slightly on the legitimate concerns and limitations of current AI systems while maintaining your core argument about the counterproductive nature of the \"useless\" criticism.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"48","tokens":"4,536","cost":"0.03","total_time_second":15.34},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]},{"id":"a9b3ac50-6356-4208-98cb-85c8c42f4236","created_at":"2025-03-05T20:00:28Z","state":"done","prompt":"Thank you, this was very helpful! \n\nI agree with your point on the \"But this specific use of AI is useless\" segment, but can't really think of a good way of rewriting it. Can you give me a few pointers on what you would change?","reply":"<h1>Reframing the "But This Specific Use of AI is Useless" Section</h1>\n<p>I'd suggest reframing this section to maintain your collaborative tone while acknowledging valid criticisms of specific AI implementations. Here are some pointers:</p>\n<h2>Suggested Approach</h2>\n<ol>\n<li>\n<p><strong>Shift from defensive to acknowledging</strong></p>\n<ul>\n<li>Start by validating the legitimate frustration with poor AI implementations rather than anticipating objections</li>\n<li>Position yourself as sharing some of these concerns rather than defending against them</li>\n</ul>\n</li>\n<li>\n<p><strong>Use a more neutral heading</strong></p>\n<ul>\n<li>Consider something like "Distinguishing Between Poor Implementations and Core Utility" or "When AI Features Miss the Mark"</li>\n</ul>\n</li>\n<li>\n<p><strong>Lead with agreement before contrast</strong></p>\n<ul>\n<li>Acknowledge specific examples of genuinely unhelpful AI implementations first</li>\n<li>Then transition to your point about not dismissing the entire technology</li>\n</ul>\n</li>\n<li>\n<p><strong>Use more inclusive language</strong></p>\n<ul>\n<li>Replace phrases like "I can already see you point at..." with more collaborative framing</li>\n<li>Use "we" language where appropriate to create common ground</li>\n</ul>\n</li>\n</ol>\n<h2>Example Rewrite Direction</h2>\n<p>Here's how you might begin the section:</p>\n<p>"Many AI features being rushed to market deserve criticism. We've all encountered the awkwardly integrated AI assistants that companies have bolted onto existing products to please shareholders. The flood of 'X, but with AI' startups often deliver little value beyond buzzword compliance. These implementations can indeed be useless or even counterproductive.</p>\n<p>This pattern isn't unique to AI. We saw similar dynamics with blockchain, IoT, and countless other technology waves. 
Poor implementations and hype-driven products deserve criticism, and pointing them out is both valid and necessary.</p>\n<p>However, the failure of these specific implementations doesn't invalidate the core utility of the underlying technology. Just as the Dot-Com bubble's burst didn't mean e-commerce was fundamentally worthless, the inevitable collapse of many AI ventures won't mean the technology itself lacks utility..."</p>\n<p>This approach acknowledges the valid criticisms while maintaining your core argument, creating a more collaborative tone that invites skeptics to engage with your broader point rather than feeling defensive.</p>","md":"# Reframing the \"But This Specific Use of AI is Useless\" Section\n\nI'd suggest reframing this section to maintain your collaborative tone while acknowledging valid criticisms of specific AI implementations. Here are some pointers:\n\n## Suggested Approach\n\n1. **Shift from defensive to acknowledging**\n - Start by validating the legitimate frustration with poor AI implementations rather than anticipating objections\n - Position yourself as sharing some of these concerns rather than defending against them\n\n2. **Use a more neutral heading**\n - Consider something like \"Distinguishing Between Poor Implementations and Core Utility\" or \"When AI Features Miss the Mark\"\n\n3. **Lead with agreement before contrast**\n - Acknowledge specific examples of genuinely unhelpful AI implementations first\n - Then transition to your point about not dismissing the entire technology\n\n4. **Use more inclusive language**\n - Replace phrases like \"I can already see you point at...\" with more collaborative framing\n - Use \"we\" language where appropriate to create common ground\n\n## Example Rewrite Direction\n\nHere's how you might begin the section:\n\n\"Many AI features being rushed to market deserve criticism. We've all encountered the awkwardly integrated AI assistants that companies have bolted onto existing products to please shareholders. The flood of 'X, but with AI' startups often deliver little value beyond buzzword compliance. These implementations can indeed be useless or even counterproductive.\n\nThis pattern isn't unique to AI. We saw similar dynamics with blockchain, IoT, and countless other technology waves. Poor implementations and hype-driven products deserve criticism, and pointing them out is both valid and necessary.\n\nHowever, the failure of these specific implementations doesn't invalidate the core utility of the underlying technology. 
Just as the Dot-Com bubble's burst didn't mean e-commerce was fundamentally worthless, the inevitable collapse of many AI ventures won't mean the technology itself lacks utility...\"\n\nThis approach acknowledges the valid criticisms while maintaining your core argument, creating a more collaborative tone that invites skeptics to engage with your broader point rather than feeling defensive.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"42","tokens":"5,057","cost":"0.02","total_time_second":11.03},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]},{"id":"e572b36d-e2df-4d44-8a54-fb9497a279d3","created_at":"2025-03-05T20:08:33Z","state":"done","prompt":"This is great, thank you.\n\nHere's a rewritten conclusion at the end. I tried to include your points on reframing \"useless\" into a more useful criticism. Do you think this is enough, or did you mean something else?\n\nSo, what should we argue about, then? In my view, there’s no lack of easy targets or hard questions to be had here. The elephant in the room is the environmental impact of planning to spend double-digit percentage points of the national power grid capacity on AI computations. The social impact of replacing (skilled or unskilled) labor with computers, the impact on the effectiveness of companies doing this too early and making expensive mistakes due to AI not being suited for these purposes, the impact on IT Security and maintainability of having AI write your code for you, the impact on education when people can just generate a complete essay, … there are lots and lots of criticisms to choose from. Pick one. Hell, pick all of them. Go to town.\n\nOr go deeper on where your feeling of “AI is useless” comes from. Is it that you expect it to over-promise and under-deliver? Is it the ecosystem of grifters looking for easy money around it? Is it the fact that the system will just make things up if it runs out of ideas? Or have you tried that specific feature someone else is touting and were disappointed? Why?\n\nThese are problems that we need to address, and many of these problems are also seen as problems by AI proponents, and are being actively worked on - because these people aren’t dumb or evil. They see the technology through other eyes, they weigh the importance of different factors differently, or they have different expectations of the effects of future models on all of these issues. This isn’t a cabal of grifters out to steal money from Hard-Working Americans™, but many of them are people who believe in what they are doing.\n\nPersonally, I find reading the articles and listening to the podcasts of people who aren’t AI evangelists but are in the pro-AI camp quite helpful to update my mental models of what the “other side” is thinking. Stuff like the Complex Systems podcast with Patrick McKenzie on boom and bust cycles, or on AI and power economics (NB: I don’t consider this an overall fantastic podcast, but find the episodes interesting for learning what a proud capitalist with some libertarian leanings thinks about these issues, even if I don’t share these views). 
Similarly, reading what the founder of an AI lab sees as the potential upside of AI can be instructive, even if you don’t agree with them. I’m sure there are more good sources that I can’t remember off the top of my head (feel free to put them in the comments below or send them to me on Mastodon and I will add them here).\n\nHaving these discussions is important. However, having an agreed common reality is a prerequisite for that. Calling AI categorically useless sabotages this common reality, which will help no one. If we want to be heard, we have to meet people half-way, and acknowledging that AI can be a tool to solve some problems, while sometimes disagreeing on whether these problems should be solved or are solved well, can be a more fruitful base for discussions. Just because something is useful does not mean that you endorse it. It just means that you have to work a little harder and dig a little deeper in your criticism - and that will make it a better discussion for everyone.","reply":"<p>Your revised conclusion is excellent! The additions significantly strengthen your argument in several ways:</p>\n<ol>\n<li>\n<p><strong>Deeper exploration of the "useless" sentiment</strong></p>\n<ul>\n<li>The new paragraph that asks readers to examine where their feeling of "AI is useless" comes from is particularly effective</li>\n<li>The questions you pose ("Is it that you expect it to over-promise...?") help skeptics articulate their concerns more precisely</li>\n<li>This invites introspection rather than defensiveness</li>\n</ul>\n</li>\n<li>\n<p><strong>Clarification on usefulness vs. endorsement</strong></p>\n<ul>\n<li>Your final sentence makes an important distinction: acknowledging usefulness doesn't equal endorsement</li>\n<li>This addresses a key concern I mentioned about readers potentially seeing your argument as blanket support for AI adoption</li>\n</ul>\n</li>\n<li>\n<p><strong>Call for more nuanced criticism</strong></p>\n<ul>\n<li>The point that deeper criticism makes for better discussion reinforces your overall thesis</li>\n<li>It positions nuanced critique as more intellectually rigorous than dismissive statements</li>\n</ul>\n</li>\n</ol>\n<p>The revision maintains your collaborative tone while adding depth to your argument. It effectively bridges from identifying problems to suggesting a more productive approach to criticism.</p>\n<p>This conclusion now does exactly what I was suggesting - it helps readers transform a simplistic "useless" criticism into more substantive and specific concerns that can be meaningfully discussed. The addition is concise yet powerful, and it rounds out your article perfectly.</p>","md":"Your revised conclusion is excellent! The additions significantly strengthen your argument in several ways:\n\n1. **Deeper exploration of the \"useless\" sentiment**\n - The new paragraph that asks readers to examine where their feeling of \"AI is useless\" comes from is particularly effective\n - The questions you pose (\"Is it that you expect it to over-promise...?\") help skeptics articulate their concerns more precisely\n - This invites introspection rather than defensiveness\n\n2. **Clarification on usefulness vs. endorsement**\n - Your final sentence makes an important distinction: acknowledging usefulness doesn't equal endorsement\n - This addresses a key concern I mentioned about readers potentially seeing your argument as blanket support for AI adoption\n\n3. 
**Call for more nuanced criticism**\n - The point that deeper criticism makes for better discussion reinforces your overall thesis\n - It positions nuanced critique as more intellectually rigorous than dismissive statements\n\nThe revision maintains your collaborative tone while adding depth to your argument. It effectively bridges from identifying problems to suggesting a more productive approach to criticism.\n\nThis conclusion now does exactly what I was suggesting - it helps readers transform a simplistic \"useless\" criticism into more substantive and specific concerns that can be meaningfully discussed. The addition is concise yet powerful, and it rounds out your article perfectly.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"38","tokens":"6,095","cost":"0.03","total_time_second":7.83},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]},{"id":"5b9ba2df-82be-4b68-94c6-4f54611e7a40","created_at":"2025-03-05T20:08:55Z","state":"done","prompt":"Is there anything else I should improve about it?","reply":"<p>Your article is already quite strong, but here are a few final suggestions to consider:</p>\n<h2>Minor Refinements</h2>\n<ol>\n<li>\n<p><strong>Introduction Enhancement</strong></p>\n<ul>\n<li>Consider adding a brief sentence acknowledging why people might resort to the "useless" shorthand (perhaps due to frustration with hype or genuine concerns about AI's impacts) before explaining why it's counterproductive. This would demonstrate empathy with your audience from the outset.</li>\n</ul>\n</li>\n<li>\n<p><strong>Examples Balance</strong></p>\n<ul>\n<li>Your personal examples of AI usefulness are effective, but consider adding one example that would resonate specifically with your skeptical audience - perhaps a use case that aligns with values they likely hold (e.g., using AI to identify patterns in environmental data or to make technology more accessible).</li>\n</ul>\n</li>\n<li>\n<p><strong>Addressing Hallucinations</strong></p>\n<ul>\n<li>You briefly mention AI "making things up," but this is a significant concern for skeptics. Consider expanding slightly on this limitation while maintaining your overall argument (e.g., acknowledging that factual reliability remains a challenge while explaining how careful use can mitigate this).</li>\n</ul>\n</li>\n<li>\n<p><strong>Transition Smoothing</strong></p>\n<ul>\n<li>The transition between your "But This Specific Use..." section and "The Right Criticisms" section could be strengthened to more clearly connect these ideas.</li>\n</ul>\n</li>\n<li>\n<p><strong>Visual Elements</strong></p>\n<ul>\n<li>If your publishing platform allows, consider adding a simple visual element (like a comparison table of productive vs. unproductive criticisms) to reinforce your key points.</li>\n</ul>\n</li>\n</ol>\n<h2>Structural Consideration</h2>\n<p>The article flows well overall, but you might consider moving your personal examples of AI usefulness slightly earlier in the piece. 
This would establish your credibility as someone who has practical experience with the technology before diving into your main argument, potentially making skeptical readers more receptive.</p>\n<h2>Final Thought</h2>\n<p>Your article strikes an excellent balance between acknowledging legitimate concerns and advocating for more productive discourse. The conversational tone and personal examples create an accessible entry point for skeptics without being condescending. With these minor refinements, you'll have a compelling piece that could genuinely shift perspectives on how we critique emerging technologies.</p>","md":"Your article is already quite strong, but here are a few final suggestions to consider:\n\n## Minor Refinements\n\n1. **Introduction Enhancement**\n - Consider adding a brief sentence acknowledging why people might resort to the \"useless\" shorthand (perhaps due to frustration with hype or genuine concerns about AI's impacts) before explaining why it's counterproductive. This would demonstrate empathy with your audience from the outset.\n\n2. **Examples Balance**\n - Your personal examples of AI usefulness are effective, but consider adding one example that would resonate specifically with your skeptical audience - perhaps a use case that aligns with values they likely hold (e.g., using AI to identify patterns in environmental data or to make technology more accessible).\n\n3. **Addressing Hallucinations**\n - You briefly mention AI \"making things up,\" but this is a significant concern for skeptics. Consider expanding slightly on this limitation while maintaining your overall argument (e.g., acknowledging that factual reliability remains a challenge while explaining how careful use can mitigate this).\n\n4. **Transition Smoothing**\n - The transition between your \"But This Specific Use...\" section and \"The Right Criticisms\" section could be strengthened to more clearly connect these ideas.\n\n5. **Visual Elements**\n - If your publishing platform allows, consider adding a simple visual element (like a comparison table of productive vs. unproductive criticisms) to reinforce your key points.\n\n## Structural Consideration\n\nThe article flows well overall, but you might consider moving your personal examples of AI usefulness slightly earlier in the piece. This would establish your credibility as someone who has practical experience with the technology before diving into your main argument, potentially making skeptical readers more receptive.\n\n## Final Thought\n\nYour article strikes an excellent balance between acknowledging legitimate concerns and advocating for more productive discourse. The conversational tone and personal examples create an accessible entry point for skeptics without being condescending. 
With these minor refinements, you'll have a compelling piece that could genuinely shift perspectives on how we critique emerging technologies.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"41","tokens":"6,561","cost":"0.03","total_time_second":11.09},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]},{"id":"dcd5c897-e69d-482a-863d-e180118a24d3","created_at":"2025-03-05T20:11:09Z","state":"done","prompt":"On the first point: I adapted the first paragraph to read like this:\n\nA common shorthand I am seeing in criticisms of AI technologies is that AI / LLMs are “useless”. This shows up again and again, especially in hot takes on Mastodon, and sometimes from people whose professional work I deeply respect. While I understand that, to some, it might be a shorthand for a more thought-out criticism, this argument still annoys me, and I think it hurts the credibility of people who fundamentally want to do something good: ask if a disruptive new technology that has big social, legal, and also environmental impact is actually “good”, “worth it”, or whatever else you want to call it. This is my case for why we should stop using this argument and talk about the (very real!) issues with AI differently.\n\nDoes this match what you mean?","reply":"<p>Yes, that's exactly what I meant! This revised paragraph is excellent and accomplishes several important things:</p>\n<ol>\n<li>\n<p><strong>Shows empathy</strong> - By acknowledging that "useless" might be a shorthand for more nuanced criticism, you demonstrate that you're not dismissing the underlying concerns of your audience.</p>\n</li>\n<li>\n<p><strong>Maintains your position</strong> - You still clearly state your view that this shorthand is problematic while respecting that it comes from people whose work you value.</p>\n</li>\n<li>\n<p><strong>Sets a collaborative tone</strong> - The phrasing suggests you're inviting readers to a more productive conversation rather than simply criticizing their approach.</p>\n</li>\n<li>\n<p><strong>Clarifies your intent</strong> - You make it clear that you're not defending AI from all criticism, but rather advocating for more effective ways to discuss its very real issues.</p>\n</li>\n</ol>\n<p>This adaptation perfectly addresses my suggestion and strengthens your introduction by establishing common ground with skeptical readers right from the start. It shows that you understand where they're coming from while still making a case for a different approach to the conversation.</p>","md":"Yes, that's exactly what I meant! This revised paragraph is excellent and accomplishes several important things:\n\n1. **Shows empathy** - By acknowledging that \"useless\" might be a shorthand for more nuanced criticism, you demonstrate that you're not dismissing the underlying concerns of your audience.\n\n2. **Maintains your position** - You still clearly state your view that this shorthand is problematic while respecting that it comes from people whose work you value.\n\n3. 
**Sets a collaborative tone** - The phrasing suggests you're inviting readers to a more productive conversation rather than simply criticizing their approach.\n\n4. **Clarifies your intent** - You make it clear that you're not defending AI from all criticism, but rather advocating for more effective ways to discuss its very real issues.\n\nThis adaptation perfectly addresses my suggestion and strengthens your introduction by establishing common ground with skeptical readers right from the start. It shows that you understand where they're coming from while still making a case for a different approach to the conversation.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"39","tokens":"6,976","cost":"0.03","total_time_second":5.99},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]},{"id":"1dff8fcf-bb1b-4dfd-b092-1d4a43442da1","created_at":"2025-03-05T20:13:19Z","state":"done","prompt":"Do you have any idea for what a header image for this article might look like? ","reply":"<h1>Header Image Ideas for "Stop Calling AI 'Useless'"</h1>\n<p>Here are several conceptual ideas for a header image that would visually represent your article's themes:</p>\n<h2>1. Bridge Between Perspectives</h2>\n<p>A visual metaphor showing two cliffs with people on either side, connected by a bridge under construction. One side could have symbols representing AI skeptics, the other AI proponents, with the bridge representing productive dialogue. This captures your core message about finding common ground.</p>\n<h2>2. Toolbox Concept</h2>\n<p>A modern toolbox with various AI tools alongside traditional tools (hammer, screwdriver, etc.), suggesting that AI is simply another set of tools with specific uses and limitations. This reinforces your point about usefulness without endorsing every application.</p>\n<h2>3. Conversation Visualization</h2>\n<p>Speech bubbles showing the evolution of criticism from simplistic ("AI is useless!") to nuanced ("AI's environmental impact needs addressing"), perhaps with a gradient color shift from red to green, symbolizing the move from unproductive to productive discourse.</p>\n<h2>4. Venn Diagram</h2>\n<p>A simple, elegant Venn diagram showing the overlap between "Valid AI Concerns" and "Productive Discourse," with "AI is useless" sitting outside both circles. This directly visualizes your argument in a clean, minimalist way.</p>\n<h2>5. Magnifying Glass Over AI Application</h2>\n<p>A magnifying glass examining an AI application in detail, suggesting the need for closer, more nuanced examination rather than dismissive generalizations. This represents your call for deeper criticism.</p>\n<h2>6. Split Screen Reality</h2>\n<p>A split image showing the same AI tool from two perspectives: one side showing its practical application solving a real problem, the other showing legitimate concerns about its implementation. 
This captures your point about acknowledging both usefulness and valid criticisms.</p>\n<p>Any of these concepts could be executed in various styles (illustration, photography, or minimal graphic design) depending on your publication's aesthetic and your personal preference.</p>","md":"# Header Image Ideas for \"Stop Calling AI 'Useless'\"\n\nHere are several conceptual ideas for a header image that would visually represent your article's themes:\n\n## 1. Bridge Between Perspectives\nA visual metaphor showing two cliffs with people on either side, connected by a bridge under construction. One side could have symbols representing AI skeptics, the other AI proponents, with the bridge representing productive dialogue. This captures your core message about finding common ground.\n\n## 2. Toolbox Concept\nA modern toolbox with various AI tools alongside traditional tools (hammer, screwdriver, etc.), suggesting that AI is simply another set of tools with specific uses and limitations. This reinforces your point about usefulness without endorsing every application.\n\n## 3. Conversation Visualization\nSpeech bubbles showing the evolution of criticism from simplistic (\"AI is useless!\") to nuanced (\"AI's environmental impact needs addressing\"), perhaps with a gradient color shift from red to green, symbolizing the move from unproductive to productive discourse.\n\n## 4. Venn Diagram\nA simple, elegant Venn diagram showing the overlap between \"Valid AI Concerns\" and \"Productive Discourse,\" with \"AI is useless\" sitting outside both circles. This directly visualizes your argument in a clean, minimalist way.\n\n## 5. Magnifying Glass Over AI Application\nA magnifying glass examining an AI application in detail, suggesting the need for closer, more nuanced examination rather than dismissive generalizations. This represents your call for deeper criticism.\n\n## 6. Split Screen Reality\nA split image showing the same AI tool from two perspectives: one side showing its practical application solving a real problem, the other showing legitimate concerns about its implementation. This captures your point about acknowledging both usefulness and valid criticisms.\n\nAny of these concepts could be executed in various styles (illustration, photography, or minimal graphic design) depending on your publication's aesthetic and your personal preference.","metadata":{"model":"Claude 3.7 Sonnet","version":"claude-3-7-sonnet-20250219","speed":"45","tokens":"7,422","cost":"0.03","total_time_second":9.65},"profile":{"id":null,"name":null,"model":"claude-3-7-sonnet","model_name":"Claude 3.7 Sonnet (reasoning)","model_provider":"anthropic","model_input_limit":1000000,"scorecard":{"speed":3.0,"accuracy":3.0,"cost":2.0,"context_window":3.0,"privacy":3.0,"description":"retired","recommended":false},"model_provider_name":"Anthropic","internet_access":false,"personalizations":true,"shortcut":null,"is_default_profile":false},"search_query":null,"citations":[],"documents":[]}]