
Path Steps

Follow these steps in order. Each one links to an EasyDNNnews article/video and gives you a quick, practical takeaway.

1. You’ll learn how to frame AI as a teammate that supports Scrum events and backlog work without replacing judgment or collaboration.
   Do this exercise: Write a 3-sentence “AI usage policy” for your team (what you will use AI for, what you won’t, and what must be reviewed by a human).

2. You’ll learn repeatable prompt patterns to generate stories with clearer intent, constraints, and acceptance criteria.
   Do this exercise: Take one messy request and prompt AI to produce (a) a user story, (b) 5 acceptance criteria, and (c) 3 key questions for the PO.

3. You’ll learn how to generate “plan options” (not commitments) and improve shared understanding of scope and dependencies.
   Do this exercise: Ask AI for 2 sprint goal options based on your top backlog items, then pick one as a team and adjust wording together.

4. You’ll learn facilitation prompts that help teams extract insights, turn feedback into actions, and avoid “retro theatre.”
   Do this exercise: Feed AI 5 bullet facts from the sprint and ask for (a) patterns, (b) 3 improvement experiments, and (c) 1 metric per experiment.

5. You’ll learn how to convert your best prompts and practices into a lightweight working agreement the team can actually follow.
   Do this exercise: Create a “Prompt Library” page with 5 prompts: refinement, story writing, planning, review, retro—each with input/output examples.
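The “Prompt Library” exercise above can start as nothing fancier than a keyed set of templates. Here is a minimal sketch, assuming one template per activity; the keys and wording are illustrative assumptions, not a prescribed format:

```python
# Minimal sketch of a team prompt library: one template per activity,
# matching the five prompts suggested in the exercise above.
# Wording is illustrative, not prescriptive.

PROMPT_LIBRARY = {
    "refinement": "Rewrite this backlog item so the business 'why' stays visible: {item}",
    "story_writing": "Turn this request into a user story with 5 acceptance criteria: {request}",
    "planning": "Suggest 2 candidate Sprint Goals for these top backlog items: {items}",
    "review": "Summarize this sprint's completed work for stakeholders: {notes}",
    "retro": "Given these sprint facts, list patterns and 3 improvement experiments: {facts}",
}

def render(activity: str, **fields) -> str:
    """Look up a template and fill in the blanks."""
    return PROMPT_LIBRARY[activity].format(**fields)

print(render("planning", items="login, password reset, audit log"))
```

A real library page would also keep an input/output example next to each template, as the exercise suggests, so new team members can see what “good” looks like.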
 

Learning Path - Free

24 Feb 2026

Step 1: What AI Can (and Can’t) Do for Scrum Teams

AI is a productivity amplifier—not a Product Owner, not a Scrum Master, and not a Developer.

Used correctly, it accelerates learning, drafting, summarizing, and exploring options. Used poorly, it replaces thinking with automation theater.

This step helps your team position AI as a supporting teammate, not a decision-maker.

Author: Rod Claar

24 Feb 2026

Step 2: Prompts That Produce Better User Stories

AI can help—but only if the prompt is structured.

This step introduces repeatable prompt patterns that improve:

  • Intent clarity

  • Constraints visibility

  • Acceptance criteria quality

  • PO alignment
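A repeatable pattern can be as simple as a fill-in template that forces intent, constraints, and acceptance criteria into every request. The sketch below is a minimal illustration of that idea; the section names and wording are assumptions, not the article’s exact pattern:

```python
# Minimal sketch of a reusable story-writing prompt.
# The sections (intent, constraints, acceptance criteria, PO questions)
# mirror the qualities listed above; the exact wording is illustrative.

STORY_PROMPT = """You are helping a Scrum team turn a request into a user story.

Request: {request}

Produce:
1. A user story: "As a <role>, I want <capability>, so that <benefit>".
2. Five testable acceptance criteria.
3. Any constraints (technical, legal, UX) worth making explicit.
4. Three open questions for the Product Owner.
"""

def build_story_prompt(request: str) -> str:
    """Fill the template with a raw, possibly messy request."""
    return STORY_PROMPT.format(request=request.strip())

print(build_story_prompt("customers keep asking to export reports somehow"))
```

Pasting the rendered prompt into whatever assistant the team uses keeps the structure constant while only the request varies.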

Author: Rod Claar

24 Feb 2026

Step 3: Backlog Refinement with AI (Without Losing the “Why”)

The Core Risk

When teams use AI in refinement, a common failure mode appears:

  • Stories get cleaner

  • Acceptance criteria get longer

  • Technical detail increases

  • Business intent becomes less visible

Scrum optimizes for value delivery, not documentation density.

AI must support the “why” behind the work.

Author: Rod Claar

24 Feb 2026

Step 4: Sprint Planning Acceleration

The Key Principle

AI should propose:

  • Possible Sprint Goals

  • Possible scope groupings

  • Possible dependency flags

The team still decides:

  • What to commit to

  • What fits capacity

  • What aligns to product strategy

AI drafts.
The team commits.

Author: Rod Claar

Learning Path - Member

 
 
Featured Content

AI for Scrum and Agile Teams
Videos

A curated playlist of selected YouTube content.


23 Apr 2025

Former OpenAI employees urge regulators to halt company’s for-profit shift

Author: Rod Claar  /  Categories: Uncategorized

By Gyana

Apr 23, 2025 · 5 mins

Artificial Intelligence, Generative AI, Regulation

OpenAI’s restructuring threatens to strip its nonprofit foundation of control over development of artificial general intelligence, violating its founding purpose, say researchers, nonprofit leaders, and former employees.

 

Credit: JarTee/Shutterstock.com

 

A broad coalition of AI experts, economists, legal scholars, and former OpenAI employees is urging state regulators to keep OpenAI’s nonprofit foundation in control of the company.

Their concern: that the company’s planned restructuring would abandon its legally mandated nonprofit purpose and place control of artificial general intelligence (AGI) in the hands of private investors.

“We write in opposition to OpenAI’s proposed restructuring that would transfer control of the development and deployment of artificial general intelligence (AGI) from a nonprofit charity to a for-profit enterprise,” the coalition wrote in an open letter addressed to the Attorneys General of California and Delaware, who together are the company’s primary regulators.

The letter’s signatories include Nobel laureates Daniel Kahneman and Joseph Stiglitz and AI pioneers Geoffrey Hinton and Yoshua Bengio. They argue that the proposed restructuring would violate OpenAI’s Articles of Incorporation, which explicitly state the organization is “not organized for the private gain of any person.”

The coalition’s appeal is supported by a separate amicus curiae brief filed by twelve former OpenAI employees in an ongoing federal lawsuit. Together, the letter and brief present a rare, coordinated public challenge to the internal governance of one of the world’s leading AI companies.

 

A legally binding mission

OpenAI was created in 2015 as a nonprofit with a single, far-reaching goal: to ensure that AGI benefits all of humanity. Its 2018 Charter outlines principles such as broadly distributed benefits, long-term safety, cooperative development, and technical leadership. These values were designed to steer OpenAI’s work even as it began raising external investment.

In 2019, OpenAI adopted a capped-profit model, establishing a limited partnership structure under full control of the nonprofit board. This arrangement, the letter notes, was meant to ensure that AGI development would always remain aligned with the public interest.

According to the open letter, the company is now seeking to restructure in a way that would eliminate this charitable governance by allowing private shareholders to assume control of AGI development and deployment.

 

The authors argued that this shift is inconsistent with OpenAI’s charitable purpose and violates both California and Delaware nonprofit law.


“As the primary regulators of OpenAI, you currently have the power to protect OpenAI’s charitable purpose on behalf of its beneficiaries, safeguarding the public interest at a potentially pivotal moment in the development of this technology,” the letter said. “Under OpenAI’s proposed restructuring, that would no longer be the case.”

Former employees validate governance concerns

The amicus brief, filed in April 2025, supports claims made in the open letter by offering firsthand accounts from within OpenAI’s leadership and research teams. The twelve former employees worked at the company from 2018 to 2024 and held roles ranging from research scientists to policy leads.

 

According to the brief, internal operations at OpenAI were built around the Charter. Employee performance reviews included assessments of how individuals advanced the mission, and senior leadership—including CEO Sam Altman—frequently referenced the Charter in strategic decisions.

But the brief also reveals a gradual shift in internal dynamics. The former employees claim that key governance principles began to erode as commercial interests grew, culminating in efforts to restructure in ways that would sever nonprofit control.

“Without control, the Nonprofit cannot credibly fulfill its Mission and Charter commitments, particularly those relating to broadly distributed benefits and long-term safety,” the brief stated.

 

The coalition’s letter closed with a call for legal action. It urged the Attorneys General to demand full transparency about OpenAI’s current and proposed structures. If OpenAI is no longer operating in line with its nonprofit obligations, the authors argue, the state must act to preserve the public mission.


With AGI development accelerating, the outcome of this governance battle may shape not just OpenAI’s future but the trajectory of AI oversight worldwide. At stake is the principle that technologies capable of reshaping economies, labor, and societies should remain accountable to the public—and not be controlled solely by shareholder interests.

 

Whether legal authorities respond to the coalition’s plea could mark a turning point in how the world manages the power and responsibility of frontier AI development.

 

Gyana is a contributing writer.



Upcoming events

Upcoming AI Training

20 May 2026
Author: Rod Claar

2 Apr 2026
Author: Rod Claar

Keep Learning — Two Ways

Choose the free track to get new lessons as they’re released, or go deeper with a structured course that puts everything into a repeatable playbook.

Free
Join updates / get new lessons

Get notified when new steps, templates, and examples are added—so you can keep improving your AI skills one sprint at a time.

Join updates
No spam. Practical lessons only. Unsubscribe any time.