AI Policy
Last updated: 14 May 2026
Important: these pages are not legal advice. The company name, address, CIN, GSTIN, and officer names shown are placeholders — have qualified counsel verify them, align subprocessor and retention details, and complete any India DPDP or intermediary registrations before you rely on these pages commercially.
This AI Policy explains how Cosmo Tara Private Limited uses artificial intelligence and related automation in the Tara product, how we think about safety and transparency, and what you should expect when you interact with a digital companion. Read it alongside our Privacy Policy, Terms of Service, and Disclaimer.
1. Purpose
Tara’s mission is to offer culturally grounded, chart-aware guidance at companion scale. AI lets us translate complex planetary context into natural conversation, voice, and ongoing continuity. This Policy sets expectations; it is not a technical whitepaper and may summarize behaviors that evolve with product releases.
2. Transparency: you are talking to software
Tara is software that presents a consistent persona. She is not a human astrologer, and we do not operate a marketplace of human readers in the core product. If human participants are ever introduced in future experiments, we will label those flows conspicuously; until then, assume all guidance is automated.
3. How responses are produced
- Chart and timing context is computed using astronomical libraries compatible with Swiss Ephemeris quality standards (for example pyswisseph in our stack). Accuracy depends on the birth or event data you supply.
- Language and reasoning layers use large language models and related tooling (for example Google Gemini / Google AI services) to phrase insights, answer follow-ups, and keep tone aligned with Tara’s character.
- Safety and policy filters may preprocess or postprocess prompts and outputs to reduce clearly unsafe or abusive content. Filters are imperfect.
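To illustrate the first step above: ephemeris computation begins by converting your birth date and time into a Julian Day timestamp, which is the form astronomical libraries expect (in a pyswisseph-based stack, `swe.julday` performs this conversion). The sketch below shows the standard Gregorian-calendar formula (Meeus, Astronomical Algorithms, ch. 7) purely for illustration — the function name and structure here are ours, not the product's actual code.

```python
import math

def julian_day(year: int, month: int, day: int, ut_hours: float) -> float:
    """Julian Day for a Gregorian calendar date and UT time of day.

    Standard Meeus formula; equivalent in spirit to pyswisseph's
    swe.julday(year, month, day, hour).
    """
    if month <= 2:
        # Treat January/February as months 13/14 of the previous year.
        year -= 1
        month += 12
    a = year // 100
    b = 2 - a + a // 4  # Gregorian leap-year correction
    return (math.floor(365.25 * (year + 4716))
            + math.floor(30.6001 * (month + 1))
            + day + ut_hours / 24.0
            + b - 1524.5)

# The J2000.0 epoch (2000-01-01 12:00 UT) is defined as JD 2451545.0:
print(julian_day(2000, 1, 1, 12.0))  # 2451545.0
```

Note that the time of day enters as a fraction of a day: this is why an uncertain or rounded birth time shifts the timestamp, and with it every downstream chart calculation — the source of the accuracy caveat above.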
4. Beneficence, non-maleficence, and fairness
We design Tara to be supportive and respectful. We prohibit uses that generate harassment, hate, credible threats, self-harm instructions, or illegal activity. We strive to reduce biased or demeaning outputs across protected characteristics; models can still err, and we welcome reports so we can improve guardrails.
5. Human oversight model
Quality and safety rely on product, engineering, and policy review (prompting, evaluators, incident response). This is distinct from a claim that a human astrologer personally reviews each reading in real time.
6. Training, logging, and improvement
We may log interactions to debug issues, prevent fraud, and measure quality. Where we use conversation data to improve models, we will rely on aggregated, de-identified, or appropriately consented datasets consistent with our Privacy Policy and vendor terms. You should assume that prompts and outputs necessary to fulfill a session may be processed by subprocessors in real time.
7. Limitations you must understand
- Not professional advice. Tara is not a doctor, therapist, lawyer, financial adviser, or cleric. Seek qualified help for medical, mental health, legal, or investment decisions.
- Not deterministic. Astrology is interpretive; AI adds stochastic language. Different phrasings may emphasize different nuances without changing underlying calculations.
- May be mistaken. Models hallucinate; ephemeris inputs may be wrong if your birth time is uncertain. You remain responsible for consequential choices.
- Not a crisis service. If you or someone else is in immediate danger, contact local emergency services.
8. Your responsibilities
- Provide accurate chart inputs and update them if you learn better data.
- Do not attempt to jailbreak models, exfiltrate system prompts at scale, or automate abusive traffic.
- Treat outputs as suggestions for reflection, not commands. Cross-check important calendar or ritual details independently when stakes are high.
9. Data practices specific to AI
Personal data supplied in chat may be sent to model providers in order to generate responses. Those transfers are governed by our Privacy Policy and the provider’s documentation. We do not sell your personal data.
10. Monitoring and feedback
We may sample or instrument sessions for safety monitoring subject to policy. If you see harmful, false, or biased outputs, email [email protected] with approximate time, topic, and (if possible) a screenshot. Good-faith reports help us tune prompts and filters.
11. Children
Tara is not directed to children below the minimum age required in your jurisdiction. AI companions can be especially risky for minors; parental controls and age gates should be implemented in the shipping apps per counsel guidance.
12. Changes
AI regulation and vendor capabilities evolve. We may update this Policy and will revise the “last updated” date. Material changes will be communicated as described in our Terms or Privacy Policy.