
Day 1 Overview: Foundation & AUTOMAT

Duration: 1.5 hours
Focus: Building prompt engineering foundations and learning your first framework


Learning Objectives

By the end of Day 1, you will:

Understand why prompt engineering matters for R&D efficiency
Apply the AUTOMAT framework to functional scientific tasks
Use conversational learning to build expertise, not just get answers
Protect IP using the Red List Protocol
Practice in the sandbox environment with real scenarios


Session Structure

Part 1: Foundation (30 minutes)

Introduction to Prompt Engineering (15 min)

  • What is prompt engineering?
  • Why it matters for materials scientists
  • Efficiency gains and quality improvements
  • The learning journey ahead

Sandbox Setup (15 min)

  • Access your local AI environment
  • First prompts and experimentation
  • Understanding model behaviour


Part 2: AUTOMAT Framework (40 minutes)

Framework Introduction (15 min)

  • Seven components: Audience, User, Task, Output, Method, Assumptions, Tone
  • Why structured prompts outperform casual requests
  • Scientific method parallels

Materials Science Applications (15 min)

  • Literature data extraction
  • Tensile testing analysis
  • SEM image interpretation
  • Complete AUTOMAT examples

Hands-On Practice (10 min)

  • Build your first AUTOMAT prompt
  • Test in sandbox
  • Refine based on output


Part 3: Conversational Learning & Security (20 minutes)

From Transaction to Learning (10 min)

  • Why "just give me the answer" fails
  • The power of "why" questions
  • Building transferable expertise

Responsible AI & IP Protection (10 min)

  • The Red List Protocol
  • What never leaves your network
  • How to sanitise sensitive data
  • Sandbox vs. external tools


Key Concepts

1. Prompt Engineering is Not About "Talking Nicely to AI"

It's about:

  • Precision specification (like experimental protocols)
  • Explicit constraints (like defining experimental boundaries)
  • Structured frameworks (like the scientific method)


2. AUTOMAT = The Scientific Method for AI

Scientific Method        AUTOMAT Framework
Define hypothesis    →   Task
Design experiment    →   Method
Specify measurements →   Output
Control variables    →   Assumptions
Document rigorously  →   Tone + Audience
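To make the mapping concrete, the seven AUTOMAT components can be assembled into a single structured prompt. This is a minimal sketch for illustration only; the `build_automat_prompt` helper and the example values are hypothetical, not part of the course materials:

```python
def build_automat_prompt(audience, user, task, output, method, assumptions, tone):
    """Assemble the seven AUTOMAT components into one structured prompt string."""
    sections = [
        ("Audience", audience),        # who the response is for
        ("User", user),                # who is asking, and their expertise
        ("Task", task),                # the hypothesis-like goal
        ("Output", output),            # required format and measurements
        ("Method", method),            # how to approach the task
        ("Assumptions", assumptions),  # controlled variables and constraints
        ("Tone", tone),                # register and documentation style
    ]
    return "\n".join(f"{label}: {text}" for label, text in sections)

prompt = build_automat_prompt(
    audience="R&D team reviewing polymer synthesis literature",
    user="Materials scientist familiar with PLA processing",
    task="Extract synthesis parameters (temperature, solvent, catalyst) from the paper below",
    output="Markdown table, one row per experiment, units in each column header",
    method="Quote values verbatim; mark missing values as 'not reported'",
    assumptions="Use only information present in the provided text",
    tone="Concise and technical",
)
print(prompt)
```

Keeping the components as named fields rather than free prose is what makes the prompt reusable as a template: swap the values, keep the structure.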

3. Conversational Learning > Task Completion

Don't ask: "What's the best solvent for PLA?"
Ask: "Why is DMF preferred? What are the trade-offs? When would alternatives be better?"

Outcome: Transferable decision framework, not just a single answer


4. IP Security is Non-Negotiable

Red List items NEVER go to external AI:

  • Unpublished data
  • Patent-pending processes
  • Customer information
  • Financial details
  • Proprietary formulations

Use the sandbox for sensitive work!
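One practical sanitisation strategy is to replace identifying details with neutral placeholders before a prompt leaves your network. The sketch below illustrates the idea only; the regex patterns and placeholder names are invented assumptions, not an official tool or your organisation's actual identifier formats:

```python
import re

# Illustrative patterns only -- adapt to your organisation's real identifiers.
REDACTIONS = [
    (re.compile(r"\bBATCH-\d{4,}\b"), "[BATCH-ID]"),                  # internal batch numbers
    (re.compile(r"\b[A-Z][a-z]+ (?:Ltd|GmbH|Inc)\b"), "[CUSTOMER]"),  # customer names
    (re.compile(r"\$[\d,]+(?:\.\d+)?"), "[AMOUNT]"),                  # financial figures
]

def sanitise(text: str) -> str:
    """Replace Red List patterns with placeholders before external use."""
    for pattern, placeholder in REDACTIONS:
        text = pattern.sub(placeholder, text)
    return text

print(sanitise("Sample BATCH-20417 for Acme Ltd, contract value $12,500."))
```

Automated redaction is a backstop, not a guarantee: always review sanitised text manually, and when in doubt keep the work in the sandbox.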


What You'll Build Today

Template Library (Started)

By end of Day 1, you'll have:

  • Literature data extraction template
  • Lab notebook formatting template
  • Risk assessment checklist
  • First conversational learning scripts

Real Efficiency Gains

Traditional approach:

  • Literature review: 4-6 hours
  • Protocol formatting: 20-30 min per protocol
  • Multiple iterations to get format right

With AUTOMAT:

  • Literature review: 1-1.5 hours (75% reduction)
  • Protocol formatting: 3-5 min per protocol (85% reduction)
  • First-shot success rate >70%

Today's Exercises

You'll practice with:

  1. Literature Data Extraction (AUTOMAT)
     • Extract synthesis parameters from 8 papers
     • Structured output for competitive analysis

  2. Hallucination Hunt
     • Find 5 deliberate errors in an AI-generated report
     • Learn verification techniques

  3. Red List Assessment
     • Evaluate 5 scenarios for IP risk
     • Practice sanitisation strategies

  4. Prompt Refinement Challenge
     • Transform a vague prompt into a high-quality AUTOMAT prompt
     • A/B test in sandbox

Materials You'll Use

  • Sandbox environment: http://192.168.1.177:3000
  • Model: Llama 3.2 (3B) - running locally, 100% private
  • Red List Protocol: Reference guide
  • Cheat Sheet: Quick reference

Success Criteria

You're ready for Day 2 when you can:

✅ Explain why structured prompts outperform casual requests
✅ Write a complete AUTOMAT prompt for a functional task
✅ Identify Red List violations in scenarios
✅ Use "why" questions to build learning dialogues
✅ Navigate the sandbox confidently


Common Questions

"Isn't this overkill for simple tasks?"

For one-sentence tasks, yes. But most R&D work isn't simple, and frameworks:

  • Ensure reproducibility
  • Reduce iteration time
  • Create reusable templates
  • Protect against hallucinations

"How long until this feels natural?"

Week 1: Frameworks feel mechanical (checking cheat sheet frequently)
Month 1: Frameworks become intuitive (70% first-shot success)
Month 3: You're building template libraries (5-10 hrs/week saved)

"Can I use ChatGPT/Claude/Gemini for this?"

For public information: Yes (literature, published methods)
For sensitive work: No - use the sandbox

Always check the Red List first!


Pre-Work (If Available)

Recommended reading before the session:

Time: 15-20 minutes

Note: Not required, but provides helpful background


Looking Ahead

Day 2 will build on today's foundation:

  • CO-STAR framework (for strategic communication)
  • Context deep dive (why it's critical)
  • Advanced hallucination detection

Day 3-4 will cover:

  • Technical architecture (how LLMs actually work)
  • Green AI (environmental impact & optimization)
  • Ethics, bias, and responsible deployment

Let's Begin!

Ready to transform how you work with AI?

Next: Introduction to Prompt Engineering