4 Essential AI Testing Prompts to Streamline Your QA Process

Joyz Ng

In today's fast-paced software development landscape, efficient testing isn't just a technical necessity—it's a business imperative. Whether you're a business owner looking to cut costs, a project manager seeking higher efficiency, or a QA professional aiming to streamline your workflow, AI-powered testing prompts can transform your quality assurance process.

Why This Matters to Your Bottom Line

  • 🎯 Cost Reduction: Automate test case generation and reduce manual planning time
  • ⚡ Increased Efficiency: Generate comprehensive test scenarios in minutes, not hours
  • 📊 Better Coverage: Ensure thorough testing with AI-generated edge cases
  • 🔄 Consistency: Standardize testing approaches across teams and projects

⚠️ Security Advisory: Protecting Your Test Information

Before diving into the prompts, an important note on security:

  • Keep Information Confidential: Testing prompts often contain sensitive information about your systems, business logic, and potential vulnerabilities
  • Use Trusted Sources: Only share testing information with trusted, secure LLM providers or use self-hosted models
  • Sanitize Data: Remove sensitive data, credentials, and internal identifiers from prompts
  • Consider Self-Hosting: For maximum security, use a self-hosted model such as Ollama for test case generation (a minimal sketch follows the note below).

Remember: Your test cases are a blueprint of your system. Treat them with appropriate security measures.
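
If you go the self-hosted route, the call to the model can be scripted rather than done through a chat UI. Below is a minimal sketch assuming a local Ollama server on its default port with a model such as "llama3" already pulled; the ask_local_model helper name and the sample prompt are our own placeholders, while the /api/chat endpoint itself is Ollama's.

```python
import requests

# Assumption: a local Ollama server is running on the default port (11434)
# and a model such as "llama3" has been pulled with `ollama pull llama3`.
OLLAMA_URL = "http://localhost:11434/api/chat"
MODEL = "llama3"

def ask_local_model(prompt: str) -> str:
    """Send a single prompt to the self-hosted model and return its reply."""
    response = requests.post(
        OLLAMA_URL,
        json={
            "model": MODEL,
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,  # return one JSON object instead of a stream
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

if __name__ == "__main__":
    # Placeholder prompt; the sections below show the real testing prompts.
    print(ask_local_model("List five smoke test cases for a login form."))
```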


Now let's dive into 4 essential testing scenarios and see how AI prompts can support each one.

1: Post-Sprint Exploratory Testing

For finding bugs in new features and changes after a sprint.

The Prompt:

These are the updates made to the project [Project Name] in the past sprint. Please summarize the updates and list the corresponding test cases to test the updated functions manually. Print it in a document.

Use this prompt together with an uploaded CSV of the issues you completed in the sprint, including the title and description of each issue. A scripted version of this step is sketched after the tips below.

Quick Tips:
  • Provide all updated features as context.
  • Compare results from different LLMs.
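
Assembling this prompt from a sprint export can also be scripted. The sketch below assumes a hypothetical sprint_issues.csv with "title" and "description" columns; adjust the file and column names to whatever your issue tracker exports.

```python
import csv

# Assumption: the sprint export is a CSV named "sprint_issues.csv" with
# "title" and "description" columns; rename these to match your tracker.
PROMPT = (
    "These are the updates made to the project [Project Name] in the past "
    "sprint. Please summarize the updates and list the corresponding test "
    "cases to test the updated functions manually. Print it in a document.\n\n"
)

def build_exploratory_prompt(csv_path: str = "sprint_issues.csv") -> str:
    """Append each completed issue's title and description to the prompt."""
    issues = []
    with open(csv_path, newline="", encoding="utf-8") as f:
        for row in csv.DictReader(f):
            issues.append(f"- {row['title']}: {row['description']}")
    return PROMPT + "Completed issues:\n" + "\n".join(issues)

# Paste the result into your chat UI, or send it with ask_local_model()
# from the earlier sketch.
print(build_exploratory_prompt())
```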

2: Pre-Release Smoke Testing

For quick validation of critical functionalities before release.

The Prompt:

You are a product owner about to release a new version of [Project Name]. Generate a set of Smoke Test cases for manual QA to ensure these main flows work without any critical issues. Categorize the test cases and divide them into sections. Make a checklist for each section with columns ID, Scenario, Expected behaviour, Pass/Fail and Remarks. Keep it short and clean.
Core functionalities:
[List your core features here]

Quick Tips:
  • List all core features.
  • Keep the list to around 20 items; don't make it too long (the sketch below adds a simple length check).
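
Filling this prompt from a feature list is straightforward to script as well. In the sketch below the feature names and project placeholder are illustrative, and the length check simply enforces the 20-item tip above.

```python
# Assumption: the core features live in a plain Python list; the names here
# are made-up examples, and "[Project Name]" is a placeholder to replace.
CORE_FEATURES = [
    "User sign-up and login",
    "Checkout and payment",
    "Order history",
]

SMOKE_PROMPT_TEMPLATE = (
    "You are a product owner about to release a new version of {project}. "
    "Generate a set of Smoke Test cases for manual QA to ensure these main "
    "flows work without any critical issues. Categorize the test cases and "
    "divide them into sections. Make a checklist for each section with "
    "columns ID, Scenario, Expected behaviour, Pass/Fail and Remarks. "
    "Keep it short and clean.\nCore functionalities:\n{features}"
)

def build_smoke_prompt(project: str, features: list[str]) -> str:
    """Fill the template and warn when the feature list grows past 20 items."""
    if len(features) > 20:
        print("Warning: more than 20 features; consider trimming the list.")
    feature_lines = "\n".join(f"- {feature}" for feature in features)
    return SMOKE_PROMPT_TEMPLATE.format(project=project, features=feature_lines)

print(build_smoke_prompt("[Project Name]", CORE_FEATURES))
```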

3: Complex Feature Testing

For thorough testing of features with complex business rules that require high coverage and depth.

The Prompt:

You are a manual QA tester. Given the user flow and rules of the system, create a set of comprehensive functional test cases to test each step in the user flow, covering positive test scenarios, negative test scenarios, edge cases and additional cases.
Categorize the test cases and divide them into sections. Each section should be a table with meaningful "ID", "Test Case", "Sample Test Data" and "Expected Behaviour" columns. Keep the sentences short and clean.
Happy flow:
[List your user flow steps]
Rules:
[List your business rules]

Quick Tips:
  • Include a complete happy flow, but start from the feature's entry point (not from system login).
  • List all validation requirements and business rules.
  • Break down large features into smaller, focused test suites for better depth.
  • Try multiple prompt attempts and combine the best test cases for comprehensive coverage (see the sketch below for one way to script this).
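
One way to act on the last two tips is to script both the breakdown into sub-suites and the repeated attempts. The sketch below assumes the same local Ollama setup as in the earlier sketch; the sub-flows, model name and output file names are all illustrative.

```python
import requests

# Assumption: a local Ollama server with a pulled model such as "llama3"
# (see the earlier sketch); the sub-flows below are made-up examples.
BASE_PROMPT = "..."  # paste the full Complex Feature Testing prompt from above

SUB_FLOWS = {
    "add_to_cart": "Happy flow:\n1. Open a product page\n2. Add the item to the cart\n3. Review the cart",
    "checkout": "Happy flow:\n1. Enter a shipping address\n2. Choose a payment method\n3. Confirm the order",
}

def ask_local_model(prompt: str) -> str:
    """Send one prompt to the self-hosted model and return its reply."""
    response = requests.post(
        "http://localhost:11434/api/chat",
        json={
            "model": "llama3",
            "messages": [{"role": "user", "content": prompt}],
            "stream": False,
        },
        timeout=300,
    )
    response.raise_for_status()
    return response.json()["message"]["content"]

def draft_test_suites(attempts: int = 3) -> None:
    """Generate several drafts per sub-flow and save each one for manual merging."""
    for name, flow in SUB_FLOWS.items():
        for attempt in range(1, attempts + 1):
            draft = ask_local_model(f"{BASE_PROMPT}\n{flow}")
            with open(f"{name}_cases_attempt_{attempt}.md", "w", encoding="utf-8") as out:
                out.write(draft)

draft_test_suites()
```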

4: Post-Bug-Fix Regression Testing

For ensuring bug fixes don't break related functionalities.

The Prompt:

You are a manual QA tester. Below is a reported bug that has already been fixed by a developer. Please suggest a list of regression test cases to check whether any related area is affected after the fix. Return your results in a checklist table with columns "ID", "Aspect", "Description", "Expected Result", "Priority".
Original bug report:
[Include full bug report with environment details and steps to reproduce]

Quick Tips:
  • Include the complete bug report with title, environment details and testing platforms, description, steps to reproduce, and expected behaviours. The sketch below shows one way to assemble this prompt from a saved report.
  • Focus on one issue at a time; don't retest multiple issues at once.
  • Use IssueSnap to help you create a clear and informative bug report.
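
If the bug report is saved as a file, building the regression prompt can be scripted too. The sketch below assumes a hypothetical bug_report.md containing the full report; the result can be pasted into a chat UI or sent to a self-hosted model as in the earlier sketches.

```python
# Assumption: the fixed bug's report is saved as "bug_report.md"; the file
# name is illustrative, not part of the original article.
REGRESSION_PROMPT = (
    "You are a manual QA tester. Below is a reported bug that has already "
    "been fixed by a developer. Please suggest a list of regression test "
    "cases to check whether any related area is affected after the fix. "
    'Return your results in a checklist table with columns "ID", "Aspect", '
    '"Description", "Expected Result", "Priority".\n'
    "Original bug report:\n"
)

def build_regression_prompt(report_path: str = "bug_report.md") -> str:
    """Prepend the regression prompt to the saved bug report."""
    with open(report_path, encoding="utf-8") as report:
        return REGRESSION_PROMPT + report.read()

# Paste the result into your chat UI, or pass it to ask_local_model()
# from the earlier sketches and save the returned checklist.
print(build_regression_prompt())
```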

Common Pitfalls to Avoid

  1. Too Vague: "Test the login feature" vs. "Test the login feature including password requirements, error handling, and session management"
  2. Missing Context: Always include relevant business rules and technical constraints
  3. Unclear Priorities: Specify which scenarios are most critical
  4. Incomplete Information: Include all necessary details about the feature or bug

Remember: The quality of your test cases directly depends on the quality of your prompts. Take time to craft detailed, specific prompts that clearly communicate your testing needs and constraints.

At the heart of Oursky is the art of software engineering: translating ideas into opportunities. We work with startups in bringing their ideas to life with the right blend of people, process, and technology — from MVP development to scalable digital products powered by the latest technologies. Schedule a no-commitment consultation with us and we can explore how to creatively harness the digital space to solve real-world problems.
