Effective Date: May 30, 2025

CapSource is committed to ensuring the responsible, transparent, and privacy-conscious use of Artificial Intelligence (AI) across our platform. As we explore new ways to enhance experiential learning and streamline collaboration between educators, students, and industry partners, we prioritize ethical data handling, clear user communication, and institutional compliance.


How CapSource Uses AI

CapSource currently uses or is actively developing AI-powered features for the following purposes:

  • Curriculum Co-Creation: Generating draft learning outcomes, timelines, and project briefs based on educator or industry input

  • Case Study & Mentorship Material Generation: Producing discussion guides, industry context briefs, and learning artifacts to support student development

  • Program Design Assistance: Helping educators translate experiential learning goals into structured programs and RFPs for prospective partners

  • Platform Support Chatbots: Offering just-in-time help and guidance for students, educators, and employers navigating the CapSource experience

All AI features are designed to assist human users, not replace their decisions.


Data Privacy & Protection

CapSource does not use personal or institutional data to train, fine-tune, or adapt any third-party or proprietary machine learning models.

Our commitments include:

  • No Model Training on User Data: Inputs provided by users (e.g., goals, prompts, documents) are used transiently for session-specific processing and are not stored for future training or analysis.

  • No Processing of Sensitive PII: AI tools do not access or analyze sensitive personal data such as grades, demographic data, financial information, or student evaluations.

  • Limited Metadata Use Only: Where applicable, AI functions may operate on non-identifying metadata (e.g., project categories, industries, topics) to support search and tagging.


User Control & Transparency

  • No Automated Decision-Making: CapSource does not use AI to make high-stakes decisions such as student matching, grading, or partner selection. All AI outputs are presented as optional, editable content for human review.

  • Future Opt-Outs: If AI-powered features are introduced that affect workflows more directly, users will be given the ability to opt out of AI-supported interactions where feasible.

  • No Institutional Profiling: We do not build AI profiles of institutions, educators, or students based on past usage or performance data.


Governance & Model Oversight

  • Internal Review: All AI features undergo privacy and risk assessments as part of our product development lifecycle.

  • Model Risk Mitigation: Our team maintains documentation for every AI integration, covering safeguards, fairness, and non-discrimination measures.

  • Vendor Accountability: Where third-party models (e.g., OpenAI, Google) are used, we establish appropriate contractual controls, including prohibitions on data retention and cross-training, as part of our vendor vetting.


Questions or Concerns?

We welcome inquiries about our AI usage policies. Please contact us at [email protected] for more information or to request data disclosures related to AI-supported features.