Agentic AI - Conversation questions

Imagine a future where “agentic AIs” are coworkers. Would you prefer working alongside a human or an AI? What qualities would you expect from an AI coworker (skills, ethics, transparency)?
If tomorrow you had to choose — only human-made content or AI-generated content for everything you read — which would you pick and why?
If you could ask one question to an AI anonymously, what would you ask?
Do you trust AI companies when they say their systems are safe and unbiased?
If you were in charge of regulation, would you scale down the pace of AI deployment — or encourage rapid growth?
Suppose you have to pitch (create a pitch deck for) an AI-based startup that claims to offer “ethical AI.” What arguments would you include to convince investors?
Do you think people should consult AI for creative or emotional tasks (therapy, companionship, decision-making)? Why or why not?
How should companies offset the risks of AI (ethical, social, environmental)? What strategies or policies would you propose if you were leading a business that uses AI?
Could the widespread use of powerful, poorly supervised AI undermine human jobs — or will it create more opportunities?
If AI becomes truly autonomous and capable of making its own decisions, should we consider giving such AI some form of “rights” or “welfare,” as some recent research suggests?
What does “agentic AI” mean to you — do you view AI as just a tool, or potentially as an autonomous “agent”?
Do you think society might write off generative AI because of scandals — or will companies find a way to regain trust?
If an AI system makes a mistake that harms someone (e.g. misinformation, bias), should the company be responsible — or also the users who trusted it?
Could “giving AI the benefit of the doubt” be dangerous? Or is it sometimes necessary to push innovation forward — what’s your take?
In the article, many raters discourage others from using AI for sensitive tasks (like medical advice). Should regulations cap certain uses of AI (e.g. health, legal)? What might those caps look like?
Do you think there’s a moral or social obligation for users (and companies) to be transparent about the hidden human labour behind AI, or is that an unnecessary risk to competitiveness?
Some workers argue that AI is “fragile, not futuristic” after seeing how it’s built. Do you agree with that view? Why or why not?
Can you size up — in broad strokes — the pros and cons of deploying “agentic AI” (AI systems capable of autonomous decisions) in business? What are the red flags?
If you were a CEO of a tech startup, would you resort to using partially trained AI models to rack up quick profits — or wait until they meet high ethical standards? Explain your reasoning.
Have you ever had a hunch that a technological tool (not just AI) wasn’t as safe or reliable as advertised? What happened?
Do you think companies should hold off on releasing new AI tools until they are more thoroughly tested? Why or why not?