
Agentic AI - Conversation questions

  • If AI becomes truly autonomous and capable of making its own decisions, should we consider giving such AI some form of “rights” or “welfare,” as some recent research suggests?
  • How should companies offset the risks of AI (ethical, social, environmental)? What strategies or policies would you propose if you were leading a business that uses AI?
  • If you were in charge of regulation, would you scale down the pace of AI deployment — or encourage rapid growth?
  • Do you think companies should hold off on releasing new AI tools until they are more thoroughly tested? Why or why not?
  • Suppose you have to pitch (create a pitch deck for) an AI-based startup that claims to offer “ethical AI.” What arguments would you include to convince investors?
  • Can you size up — in broad strokes — the pros and cons of deploying “agentic AI” (AI systems capable of autonomous decisions) in business? What are the red flags?
  • Do you think people should consult AI for creative or emotional tasks (therapy, companionship, decision-making)? Why or why not?
  • Have you ever had a hunch that a technological tool (not just AI) wasn’t as safe or reliable as advertised? What happened?
  • If you could ask one question to an AI anonymously, what would you ask?
  • If an AI system makes a mistake that harms someone (e.g. misinformation, bias), should the company be responsible, or should the users who trusted it share the blame?
  • Imagine a future where “agentic AIs” are coworkers. Would you prefer working alongside a human or an AI? What qualities would you expect from an AI coworker (skills, ethics, transparency)?
  • Could the widespread use of powerful, poorly supervised AI undermine human jobs — or will it create more opportunities?
  • Could “giving AI the benefit of the doubt” be dangerous? Or is it sometimes necessary to push innovation forward — what’s your take?
  • Do you trust AI companies when they say their systems are safe and unbiased?
  • Do you think society might write off generative AI because of scandals, or will companies find a way to regain trust?
  • Do you think there’s a moral or social obligation for users (and companies) to be transparent about the hidden human labour behind AI, or is that an unnecessary risk to competitiveness?
  • If tomorrow you had to choose — only human-made content or AI-generated content for everything you read — which would you pick and why?
  • What does “agentic AI” mean to you — do you view AI as just a tool, or potentially as an autonomous “agent”?
  • Some workers argue that AI is “fragile, not futuristic” after seeing how it’s built. Do you agree with that view? Why or why not?
  • In the article, many raters discourage others from using AI for sensitive tasks (like medical advice). Should regulations cap certain uses of AI (e.g. health, legal)? What might those caps look like?
  • If you were the CEO of a tech startup, would you resort to using partially trained AI models to rack up quick profits, or wait until they meet high ethical standards? Explain your reasoning.