Consent & Liability: Template Language for AI Intake in Therapy Practices
Copy-ready consent language and guardrails for AI intake and scheduling: disclosures, boundaries, and escalation rules that protect patients and clinicians.
Quick answer
You can use AI intake safely if you give clear notice, obtain consent where needed, and set boundaries on what the assistant can and cannot do. Pair concise disclosures with escalation rules and a human backup. Below are starter templates and guardrails you can adapt for your practice. For HIPAA-aligned intake and scheduling, see PsyFi Assistant.
Core principles for consent
- Transparency: Explain that an AI assistant helps collect information and draft scheduling responses; a human reviews when needed.
- Scope: Limit the assistant to scheduling, basic eligibility, and FAQs. Exclude diagnosis, crisis counseling, and medication guidance.
- Choice: Offer a human path at any time.
- Safety: State clearly that the AI channel must not be used for emergencies.
- Retention: State what you keep, for how long, and how to request deletion.
Starter consent/notice language (adaptable)
“We use an AI assistant to help with scheduling and intake. Your information is encrypted and may be reviewed by our staff before appointments are confirmed. Do not use this channel for emergencies—call 911 or your local emergency number. You can ask for a human at any time.”
Add specifics:
- Data retention (e.g., “We retain chat transcripts for 30 days for scheduling, then purge.”)
- Who sees the data (e.g., “Our scheduling team; no third parties beyond our HIPAA-covered vendors.”)
- How to opt out (e.g., “Reply ‘HUMAN’ to switch.”)
Boundaries to codify
- No crisis automation: Route crisis keywords to a human immediately with a standard crisis response.
- No clinical advice: Block diagnosis/medication prompts; provide a generic redirect (“Please discuss with your clinician”).
- Approval for changes: Staff approve reschedules/cancellations for certain appointment types.
- Geography/insurance rules: Enforce location and payer rules before confirming.
- Minors/guardianship: Require guardian consent; avoid storing sensitive minor disclosures in long-term memory.
Escalation rules (example)
- Crisis terms → auto-respond with crisis message + notify staff.
- Out-of-state or insurance not accepted → pause and route to staff.
- More than 2 reschedules in 30 days → staff approval required.
- New patient intakes with flagged conditions → staff review before confirmation.
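The escalation rules above can be expressed as a simple rule-based router. The sketch below is illustrative only: the keyword list, thresholds, and field names are assumptions you would replace with your practice's own configuration, and real crisis detection should be broader than plain keyword matching.

```python
# Minimal sketch of a rule-based escalation router for AI intake.
# CRISIS_TERMS, the context fields, and the outcome labels are all
# illustrative assumptions, not a real product API.

from dataclasses import dataclass

CRISIS_TERMS = {"suicide", "kill myself", "hurt myself", "overdose"}  # example list only

@dataclass
class IntakeContext:
    state: str                 # patient's state of residence
    insurance_accepted: bool   # payer rule checked upstream
    reschedules_30d: int       # reschedule count in the last 30 days
    flagged_condition: bool    # flag set by screening questions

def route(message: str, ctx: IntakeContext, licensed_states: set) -> str:
    """Apply the escalation rules in priority order; crisis always wins."""
    text = message.lower()
    if any(term in text for term in CRISIS_TERMS):
        return "crisis_response_and_notify_staff"
    if ctx.state not in licensed_states or not ctx.insurance_accepted:
        return "pause_and_route_to_staff"
    if ctx.reschedules_30d > 2:          # "more than 2 in 30 days"
        return "staff_approval_required"
    if ctx.flagged_condition:
        return "staff_review_before_confirmation"
    return "continue_automated_flow"
```

Ordering matters: crisis detection runs first so that no other rule (insurance, geography) can swallow a crisis message.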
Template snippets you can reuse
Emergency disclaimer (site footer / first message):
“This channel is not for emergencies. If you’re in crisis, call 911 or your local emergency number.”
Opt-out prompt:
“If you’d prefer a human, reply ‘HUMAN’ and we’ll take over.”
Reschedule confirmation:
“We can move your appointment. Here are the next available times: [link]. If none work, reply and our team will help.”
Insurance not accepted:
“We may be out of network for your plan. Our team will review and follow up.”
Minor/guardian:
“If this request is for a minor, please have a parent or legal guardian contact us to complete scheduling.”
Implementation checklist
- Add consent/notice to your website and first AI message.
- Configure crisis keywords and escalation routing; test it weekly.
- Set transcript retention and purge schedule; disable training on PHI.
- Train staff on when to take over, how to log actions, and how to export/delete data on request.
- Run a one-week pilot with staff test data; then roll out to a limited patient group.
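The retention-and-purge item in the checklist can be automated with a small scheduled job. The sketch below assumes transcripts are stored as records with an `id` and a timezone-aware `created_at`; the field names and the 30-day window (matching the example notice language above) are assumptions.

```python
# Minimal sketch: identify chat transcripts past the retention window
# so a scheduled job can purge them. Record shape and RETENTION_DAYS
# are illustrative assumptions.

from datetime import datetime, timedelta, timezone
from typing import Optional

RETENTION_DAYS = 30  # mirrors the "retain for 30 days, then purge" notice

def select_expired(transcripts: list, now: Optional[datetime] = None) -> list:
    """Return IDs of transcripts older than the retention window."""
    now = now or datetime.now(timezone.utc)
    cutoff = now - timedelta(days=RETENTION_DAYS)
    return [t["id"] for t in transcripts if t["created_at"] < cutoff]
```

Running this on a schedule (and logging what was purged) gives you an auditable record that the stated retention policy is actually enforced.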
Where PsyFi fits
PsyFi Assistant provides HIPAA-aligned intake and scheduling with:
- Configurable consent banners and first-message disclosures.
- Guardrails for crisis terms, insurance, geography, and minors.
- Human-in-the-loop approvals for changes and reschedules.
- Encryption, RBAC, audit logs, and retention controls.
Explore: PsyFi Assistant, PsyFiGPT, and PsyFi Reports.
FAQ
Do I need explicit consent for SMS?
Yes. Collect SMS consent before texting, keep PHI in reminders to a minimum, and include the appointment time zone and a human fallback contact.
Can I let AI handle medication or diagnosis questions?
No. Redirect to a clinician. Keep intake to scheduling, eligibility, and logistics.
How should I handle minors?
Require guardian consent and avoid storing sensitive minor disclosures in long-lived transcripts.
What if patients refuse AI?
Provide a human option without penalty. Note preferences in their record to avoid re-prompting.
How often should I review scripts?
Quarterly, or after any incident. Run test transcripts through your crisis and escalation filters.