On October 8, 2025, SEI Atlanta’s Katie Fullerton spoke with 11Alive in connection with the Georgia Senate’s study on social media and AI safety for kids. The segment highlights growing concerns about how children interact with chatbots and what practical safeguards can help families and schools.
As both a parent and AI technologist, Katie emphasizes the real-world complexity of monitoring AI tools and the need for thoughtful, actionable guardrails that protect children without blocking beneficial uses of technology.
Why AI Safety for Children Matters
- Kids can encounter AI systems across devices, apps, and websites, which makes simple content filters or single-app controls insufficient.
- Lawmakers and schools are evaluating practical steps to keep children safe while enabling learning and creativity.
How SEI Helps Organizations Implement Responsible AI
SEI partners with enterprises, public sector organizations, and school systems to design and implement responsible AI programs that are practical and defensible.
- AI Readiness and Risk Assessments: Current-state reviews of data, workflows, and AI usage to identify risks and opportunities, aligned to business goals and policy requirements.
- AI Governance and Policy Design: Clear guidance for safe use cases, guardrails for staff and vendors, and decision frameworks for model selection and oversight.
- Privacy, Consent, and Youth-Safety Controls: Operating patterns for preference and consent management, plus privacy-by-design practices that support age-appropriate experiences.
- Secure Architecture and Vendor Evaluation: Reviews of integrations, data flows, and third-party tools to reduce exposure while enabling data and AI capabilities.
- Leadership Enablement and Team Training: Executive briefings and practical workshops to align stakeholders on responsible AI adoption and change management.
Watch the full video, courtesy of 11Alive.