We want Dragon to interface with parents via text messaging, beginning with iMessage.
- Research and prototype how Dragon can send and receive iMessages.
- 0.5 - 1 day research: iMessage API / AppleScript solutions, Twilio APIs, pricing, limitations
- 3 days: iMessage send + receive prototype (see the sketch after this list)
- 1 day: Twilio send + receive prototype (see the sketch after this list)
- 2 days: tidy up prototypes, messaging server scaffolding, integration with SAGA, etc.
- Total: ~7 days. Deliverable: a websocket server for sending and receiving messages.
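For the iMessage prototype, a minimal sketch of the AppleScript route, assuming we run on a Mac signed into iMessage: sending shells out to `osascript`, and receiving polls Messages' local SQLite store (which requires granting Full Disk Access). The phone number is a placeholder, and the `account`/`participant` terminology is for recent macOS; older versions use `service`/`buddy`.

```python
import sqlite3
import subprocess
from pathlib import Path

CHAT_DB = Path.home() / "Library/Messages/chat.db"  # Messages' local store

def send_imessage(recipient: str, body: str) -> None:
    """Send an iMessage by driving Messages.app through AppleScript.
    (Real code must escape double quotes inside `body`.)"""
    script = f'''
    tell application "Messages"
        set targetService to 1st account whose service type = iMessage
        send "{body}" to participant "{recipient}" of targetService
    end tell
    '''
    subprocess.run(["osascript", "-e", script], check=True)

def recent_incoming(limit: int = 10) -> list[tuple[str, str]]:
    """Read the newest incoming messages straight from chat.db (read-only)."""
    conn = sqlite3.connect(f"file:{CHAT_DB}?mode=ro", uri=True)
    rows = conn.execute(
        """
        SELECT handle.id, message.text
        FROM message
        JOIN handle ON message.handle_id = handle.ROWID
        WHERE message.is_from_me = 0 AND message.text IS NOT NULL
        ORDER BY message.date DESC
        LIMIT ?
        """,
        (limit,),
    ).fetchall()
    conn.close()
    return rows

if __name__ == "__main__":
    send_imessage("+15551234567", "Rawr! Dragon here.")  # placeholder number
    for sender, text in recent_incoming():
        print(sender, "->", text)
```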
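And a corresponding Twilio sketch using their official Python helper: outbound is one REST call (no Mac required), inbound arrives as a webhook POST. Flask here is just a stand-in for our eventual messaging server; the env var names and port are placeholders.

```python
import os
from flask import Flask, request
from twilio.rest import Client
from twilio.twiml.messaging_response import MessagingResponse

app = Flask(__name__)
client = Client(os.environ["TWILIO_ACCOUNT_SID"], os.environ["TWILIO_AUTH_TOKEN"])
DRAGON_NUMBER = os.environ["TWILIO_NUMBER"]  # our purchased Twilio number

def send_sms(to: str, body: str) -> None:
    """Outbound: a single REST call to Twilio."""
    client.messages.create(to=to, from_=DRAGON_NUMBER, body=body)

@app.route("/sms", methods=["POST"])
def incoming_sms():
    """Inbound: Twilio POSTs here (webhook configured in the Twilio console)."""
    sender = request.form["From"]
    text = request.form["Body"]
    print(f"{sender}: {text}")  # hand off to the messaging server here
    reply = MessagingResponse()
    reply.message("Dragon got your message!")
    return str(reply)

if __name__ == "__main__":
    app.run(port=5000)
```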
- Use an LLM and emojis to answer the parent in character. Draw on SAGA memories.
- ~1-2 days for initial setup + final polish (scaffolding for passing messages through LLM, prompt engineering to get the tone we want, emoji representations - maybe through LLM, maybe outside)
- ~2 days for getting a full status report from SAGA / videocall server (as the Dragon, what do I see now, am I currently on a call, what’s the last conversation I had, maybe even the event log, etc.) passed to LLM as context.
- ~1-2 days for testing
- Total: 4-6 days. This one's a little blurrier scope-wise, but I think we can get something going in ~4 days, and it may give us ideas for more things we want to ask Dragon; added question types could add time. Deliverable: Dragon answers parent questions it can answer using yassified text messages (maybe limited to the scope of what we can ask Dragon on calls, plus 1-2 extras), with reasonable answers for when it can't answer a question (doesn't know, it's inappropriate, etc.). A sketch follows the feature list below.
- List of possible features:
    - "When was your last conversation?", "What did you talk about?"
    - "What are you thinking about?", "What do you see?", "Send a picture!"
    - "Are you on a call now?"
    - "Can you call me?" / "Can you call me at 5:30?"
    - "Have you ever talked about fairytales / death / windows?"
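To make the LLM-with-context idea concrete, a rough sketch assuming a hypothetical SAGA status endpoint and an OpenAI-style chat API; the model name, endpoint URL, and status fields are all placeholders, not settled choices:

```python
import requests
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set; any chat LLM would do

SYSTEM_PROMPT = (
    "You are Dragon, a playful companion texting a child's parent. "
    "Answer warmly, in 1-2 short sentences, with a couple of fitting emojis. "
    "Only answer from the status report below; if you can't answer, say so kindly."
)

def status_report() -> dict:
    """Pull Dragon's current state from SAGA / the videocall server.
    Endpoint and fields here are hypothetical placeholders."""
    return requests.get("http://localhost:8080/saga/status", timeout=5).json()

def answer_parent(question: str) -> str:
    status = status_report()
    context = (
        f"On a call now: {status.get('on_call')}. "
        f"Last conversation summary: {status.get('last_conversation')}. "
        f"Currently seeing: {status.get('camera_summary')}."
    )
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {"role": "system", "content": SYSTEM_PROMPT + "\n\n" + context},
            {"role": "user", "content": question},
        ],
    )
    return resp.choices[0].message.content

print(answer_parent("What did you talk about last time?"))
```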
- Let the parent set boundaries (off-limits subjects/concepts)
- 1 day: classify text input as boundary setting vs question (vs other possible categories)
- 1-2 days: turn the message into structured data for passing to SAGA (as a first step, a blacklist of concepts to be avoided? see the sketch after this list)
- Total: ~2-3 days for MVP, with the possibility of increasing once the scope is better defined. Deliverable: Dragon takes in text messages, recognizes that the parent is trying to blacklist one or more concepts, creates a list of blacklisted concepts, and passes it on to SAGA.
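A minimal sketch of the classify-then-structure step, assuming an OpenAI-style chat API with JSON output; the category set, JSON schema, model name, and SAGA endpoint are all first-guess placeholders:

```python
import json
import requests
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set

def parse_parent_message(text: str) -> dict:
    """Classify a parent text; for boundary-setting messages, also pull out
    the concepts to blacklist. Categories and schema are a first guess."""
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        response_format={"type": "json_object"},
        messages=[
            {
                "role": "system",
                "content": (
                    "Classify the parent's message. Reply as JSON: "
                    '{"category": "boundary" | "question" | "other", '
                    '"blacklist": [concepts to avoid; empty unless boundary]}'
                ),
            },
            {"role": "user", "content": text},
        ],
    )
    return json.loads(resp.choices[0].message.content)

def push_blacklist_to_saga(concepts: list[str]) -> None:
    """Hand the structured blacklist to SAGA (hypothetical endpoint)."""
    requests.post("http://localhost:8080/saga/blacklist", json={"concepts": concepts})

parsed = parse_parent_message("Please don't talk about death or scary monsters.")
if parsed["category"] == "boundary":
    push_blacklist_to_saga(parsed["blacklist"])
```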
- Let the parent suggest chores that Dragon should massage into its next challenge.
- 1-2 days: extend input classification to "challenge suggestion" + turn the parent's suggestion into simple challenge text using an LLM (see the sketch after this list)
- ???
- Total: ~2 days for MVP; this one is probably the least defined.
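A first-guess sketch of the chore-to-challenge step (same assumed LLM setup as above; the prompt is just a starting point):

```python
from openai import OpenAI

llm = OpenAI()  # assumes OPENAI_API_KEY is set

def chore_to_challenge(suggestion: str) -> str:
    """Turn a parent's chore suggestion into Dragon-flavored challenge text."""
    resp = llm.chat.completions.create(
        model="gpt-4o-mini",  # placeholder model
        messages=[
            {
                "role": "system",
                "content": (
                    "You are Dragon. Rewrite the parent's chore as a short, "
                    "exciting quest for the child. One or two sentences."
                ),
            },
            {"role": "user", "content": suggestion},
        ],
    )
    return resp.choices[0].message.content

print(chore_to_challenge("Can you get them to tidy their room before dinner?"))
```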
Total: ~16 days, but likely to grow to ~20 or more based on learnings from the MVPs.
We want Dragon to make outgoing video calls. This will enable callback behavior and allow parents to schedule calls to themselves or gift calls to other families.
- 1-3 days for a prototype, depending on how difficult it is to call the right person with AppleScript FaceTime automation (see the sketch after this list)
- 1 day for full integration with server / exposing APIs / polish / etc.
- more for extra features (scheduling, either from SAGA or via a text message from the parent: "please call Silas at 7pm", etc.)
Total: 2-4 days for MVP, more for extra features. Deliverable: API endpoints on the videocall server that trigger Dragon to call a specific user, now or at a scheduled time.
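A rough sketch of the outgoing-call prototype: the `facetime://` URL scheme primes the call, and a System Events click (which requires Accessibility permission) confirms it. The button index, delay, and number below are guesses that will need tuning per macOS version.

```python
import subprocess
import time

def place_facetime_call(number: str) -> None:
    """Start a FaceTime call via the facetime:// URL scheme, then confirm
    the call with a simulated click."""
    subprocess.run(["open", f"facetime://{number}"], check=True)
    time.sleep(2)  # give FaceTime time to open; delay is a guess
    confirm = '''
    tell application "System Events" to tell process "FaceTime"
        click button 1 of window 1 -- assumed to be the Call button
    end tell
    '''
    subprocess.run(["osascript", "-e", confirm], check=True)

place_facetime_call("+15551234567")  # placeholder number
```

Scheduling could then be a thin wrapper over this function, exposed behind the videocall server's API.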
Research and prototype the viability of other commonly used video-calling platforms (WhatsApp, Zoom? Telegram?). This is about finding holistically better pathways to video calls + texting, and learning what it will take to get Dragon to call users across disparate services (especially with Android users in mind).
- 1-2 days research
- Timebox to 1-2 days per platform for small prototypes
Total: hard to estimate now; I think we start with 1-2 days of research and reassess.
We want to scale FaceTime to support multiple Dragons (5-10). Is there a solution that is not just 10 Mac minis and 10 iCloud accounts?