TechDeck Academy – Your Personal Tech Learning System
A self-adapting, GitHub-based boot camp that emails you tailored challenges, reviews your work in your chosen mentor's voice, lets you "mail" questions to your AI teacher, tracks your progress with weekly, monthly, and quarterly reports (with data you can graph), and reshapes your roadmap as you improve.
Getting Started
Clone the repo
git clone https://github.com/yourusername/techdeck-academy.git
cd techdeck-academy
5. Architecture
flowchart TD
subgraph Repo
SUB[/submissions/]
QIN[/letters/to-mentor/]
QOUT[/letters/from-mentor/]
WK[/progress/weekly/]
MO[/progress/monthly/]
QT[/progress/quarterly/]
RM[/roadmap.md/]
end
subgraph Actions
A1[send-challenge.yml]
A2[process-submissions.yml]
A3[respond-to-letters.yml]
A4[generate-digests.yml]
end
subgraph Services
LLM[Google Gemini Flash 2.5]
MAIL[Resend Email API]
end
A1 --> LLM --> MAIL -->|challenge| User([You])
User -->|solution| SUB --> A2 --> LLM --> MAIL -->|feedback| User
User -->|question| QIN --> A3 --> LLM --> MAIL -->|reply| User
A4 --> WK & MO & QT & RM --> User
6. Message Sequence
sequenceDiagram
participant You
participant GH as GitHub Actions
participant LLM as Gemini
participant MAIL as Resend
participant Repo
Note over GH: Trigger per schedule
GH->>LLM: Generate challenge(task, difficulty, sessionLength)
LLM-->>GH: Return title + description + examples
GH->>MAIL: Send HTML email
MAIL-->>You: You get the challenge
You->>Repo: Commit solution in submissions/
Repo->>GH: Triggers submission workflow
GH->>LLM: Request code review (mentorProfile, emailStyle)
LLM-->>GH: Return feedback
GH->>MAIL: Email feedback + PR comment
MAIL-->>You: You get the review
You->>Repo: Drop a letter in letters/to-mentor/
Repo->>GH: Triggers respond-to-letters.yml
GH->>LLM: Read letter + history → generate reply
LLM-->>GH: Reply content
GH->>MAIL: Send mentor’s reply
MAIL-->>You: You get the answer
Note over GH: Weekly/Monthly/Quarterly digests
GH->>Repo: Read notes, submissions, feedback
GH->>LLM: Summarise progress + suggest next steps
GH->>Repo: Write reports + update roadmap.md
7. Email Types & Tone
Challenge: specs, examples, time estimate
Feedback: line-by-line code review, concept hints, next tasks
Addressing common questions about the TechDeck Academy learning system.
Q: How reliable can AI-generated challenges and feedback really be? Isn't AI sometimes inaccurate or inconsistent?
A: TechDeck Academy uses powerful modern models like Google's Gemini 2.5 Flash, which has a large context window (1M tokens) and strong reasoning capabilities. While no AI is perfect (human teachers aren't always perfect either!), the quality is surprisingly high, often exceeding standard educational experiences. The key is scope: challenges are designed to be focused, like LeetCode problems or specific project features, minimizing ambiguity. The system is a tool, a highly capable partner for targeted practice and feedback within defined boundaries. We can also swap in more powerful models if needed, but 2.5 Flash has proven extremely capable for this use case.
Q: Building and maintaining all those GitHub Actions seems complex and potentially fragile.
A: The GitHub Actions described are primarily for the prototype to demonstrate the workflow. A production version of TechDeck Academy would likely be built using a more robust backend infrastructure, such as Cloudflare Workers with TypeScript. This allows for better scalability, maintainability, error handling, and overall resilience than relying solely on chained GitHub Actions for core functionality.
Q: Won't running AI models constantly for challenges, reviews, and Q&A become expensive?
A: Not really. Gemini 2.5 Flash pricing is low enough that, at the volumes this system generates, cost is close to a non-issue. In the current GitHub-based prototype you supply your own GEMINI_API_KEY and RESEND_API_KEY as repository secrets; a hosted production version would manage keys on the backend so users never have to micromanage tokens or API quotas. Either way the design stays cheap, fast, and scalable.
Q: This is entirely self-driven. What about learner motivation and discipline? Won't people just drop off?
A: TechDeck Academy is a tool for self-directed learning, much like an online course, a textbook, or remote university study. Motivation and discipline are inherently the user's responsibility in any such system. The Academy provides the structure, the personalized content, the feedback loop, and the progress tracking – elements designed to support motivation. However, the drive to log in, complete challenges, and engage with the material must come from the learner. This is not a weakness of the system, but the nature of self-paced education.
Q: Can the "mentor voice" really work? Won't a "virtual Linus" just be discouraging or give bad feedback?
A: The mentor voice is entirely user-configurable for a reason. Different people respond to different styles. If you find a supportive, casual tone most effective, you set that. If you thrive on direct, critical feedback (even if harsh), you can choose a profile inspired by figures like Linus Torvalds, Theo, or ThePrimeagen. The system provides the capability; the user chooses the experience. If a user selects a harsh mentor and finds it demotivating, they can change the setting. The goal is maximum personalization, empowering users to learn in the style they find most effective and motivating. For some, tough feedback is the motivation.
Q: How do you ensure the initial challenges are correctly calibrated to the user's stated difficulty?
A: Calibration is an iterative process handled through:
Prototyping & Testing: Fine-tuning the prompts used to generate initial challenges based on testing.
Multiple AI Calls: If necessary, the system can make several focused AI calls behind the scenes to refine a challenge description, add examples, or verify scope before sending it to the user.
Adaptive System: The very first challenge might require slight adjustment, but the system is designed to quickly adapt based on the user's performance on subsequent challenges. Success leads to increased difficulty; struggles lead to review and adjustment.
Resilient Architecture: Background workers and scheduled tasks can handle generation, retries, and refinement without impacting the user directly. Errors can be logged and addressed. Similar AI generation tasks (like for TechDeck cards) have proven robust with iterative refinement and simple automated cleanup where needed.
TechDeck Academy is designed to be a practical, evolving learning companion. It leverages powerful, cost-effective AI within a familiar developer workflow, placing control and personalization firmly in the user's hands.
Smart Core Concept: Using GitHub as the platform is sharp. Developers live there; leveraging Git, Markdown, and email means minimal friction. Good insight.
Personalization Engine: The adaptive nature is the killer app here. Adjusting challenges, pace, feedback style (mentor voice), and the roadmap based on actual performance is exactly what personalized learning should be.
Clear Architecture: The config.ts, repo layout, and defined workflows show you've thought through the mechanics. The planned move to a robust backend (like Cloudflare Workers) over relying solely on prototype GitHub Actions for production is the right call for scalability and reliability.
User Control & Cost: Giving users control over mentor voice and using efficient models like Gemini Flash 2.5 with user-provided keys smartly addresses personalization and keeps costs manageable.
The Execution Challenge:
AI Quality is Paramount: While using capable models like Flash 2.5 and scoping challenges helps mitigate risk, the success of this entire system rides on the consistent quality of the AI interactions.
Can it generate truly effective, well-scoped challenges every time?
Can it provide code reviews that offer genuine insight beyond the superficial?
Can it handle Q&A accurately and adapt the learning path intelligently?
This isn't just about the AI working, but working as an effective educator. This requires careful prompt engineering, testing, and refinement. It remains the biggest execution hurdle.
Other Considerations:
Motivation: As with any self-directed tool, user motivation is key. While the personalized feedback and progress tracking help, long-term engagement relies on the user's discipline. Framing this correctly as user responsibility is fair.
TechDeck Tie-in: Integrating with the broader TechDeck ecosystem offers potential network effects and social proof, adding value beyond the standalone tool.
Score: 95/100
Bottom Line:
This is a very strong concept with a well-thought-out architecture and a compelling value proposition. You've addressed major potential pitfalls (cost, implementation complexity, user control) effectively in the design. The potential here is huge. The score reflects that high potential, tempered only by the inherent challenge of executing the AI components to a consistently high educational standard. Nail that AI quality and reliability, and you've got something truly powerful. Great work.
TechDeck Academy is a personalised, AI-powered learning system that connects directly to your TechDeck profile, generating dynamic roadmaps based on your public contributions, social presence, and technical focus.
What Is It?
TechDeck Academy auto-generates a tailored learning plan for each TechDeck user, turning your profile into a launching point for technical growth.
It analyses:
Your GitHub bio and pinned repos
Your most-used programming languages and frameworks
Your social activity on X/Twitter
The type of followers you attract (beginner, intermediate, advanced)
From this, it builds a custom curriculum, including:
Bite-sized coding challenges via GitHub Actions or Resend
Mentor-style feedback (e.g. “Linus Torvalds” tone, or “DHH” style reviews)
Progress digests that track what you’ve improved on and where you’re struggling
Public TechDeck badge stats, e.g. Learning Level: 4 · Mentor: ThePrimeagen · Focus Area: TypeScript + Testing
Why It Works
Learning in isolation is hard.
Learning in public, guided by AI feedback, builds skill and reputation.
With TechDeck Academy:
Users can grow in real time and show it off through their profile cards
Challenges adapt to your improvement rate and topic preferences
Mentorship feels real because you choose the tone you respond best to
The community can see who's actively learning and levelling up
How It Ties Into TechDeck
TechDeck already acts as a collectible, data-rich directory of the tech scene.
Academy builds on that by turning your card into a training badge.
It transforms a static snapshot into a living learning record.
Each card becomes more than a flex — it becomes a tracker of how far you've come.
1. Configuration (config.ts)
// Define configuration object with TypeScript types
DEFINE config object:
// Personal information
userEmail: string
githubUsername: string
// Learning preferences
subjectAreas: array of strings
topics: map of subject area to array of topic strings
difficulty: number (1-10)
sessionLength: number (minutes)
// Style preferences
mentorProfile: string (options: "linus", "supportive", "technical")
emailStyle: string (options: "casual", "formal", "technical")
// Schedule
schedule: string (options: "daily", "threePerWeek", "weekly")
// Archive settings
archive: {
enabled: boolean
challengeRetentionDays: number
submissionRetentionDays: number
letterRetentionDays: number
detailedStatsRetentionDays: number
compactSummariesAutomatically: boolean
maxActiveFilesPerType: number
}
EXPORT config
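Rendered as actual TypeScript, the configuration above might look like the following sketch. Field names mirror the pseudocode; the sample values and the `Config` interface name are placeholders, not the repo's real contents:

```typescript
// Sketch of config.ts; field names mirror the pseudocode, values are placeholders.
type MentorProfile = "linus" | "supportive" | "technical";
type EmailStyle = "casual" | "formal" | "technical";
type Schedule = "daily" | "threePerWeek" | "weekly";

interface ArchiveConfig {
  enabled: boolean;
  challengeRetentionDays: number;
  submissionRetentionDays: number;
  letterRetentionDays: number;
  detailedStatsRetentionDays: number;
  compactSummariesAutomatically: boolean;
  maxActiveFilesPerType: number;
}

interface Config {
  userEmail: string;
  githubUsername: string;
  subjectAreas: string[];
  topics: Record<string, string[]>;
  difficulty: number;      // 1-10
  sessionLength: number;   // minutes
  mentorProfile: MentorProfile;
  emailStyle: EmailStyle;
  schedule: Schedule;
  archive: ArchiveConfig;
}

const config: Config = {
  userEmail: "you@example.com",
  githubUsername: "yourusername",
  subjectAreas: ["programming"],
  topics: { programming: ["typescript", "testing"] },
  difficulty: 4,
  sessionLength: 60,
  mentorProfile: "linus",
  emailStyle: "technical",
  schedule: "threePerWeek",
  archive: {
    enabled: true,
    challengeRetentionDays: 30,
    submissionRetentionDays: 30,
    letterRetentionDays: 30,
    detailedStatsRetentionDays: 90,
    compactSummariesAutomatically: true,
    maxActiveFilesPerType: 50,
  },
};
```

In the real module this would be `export const config` so the workflow scripts can import it.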
2. Type Definitions (src/types.ts)
// Define types for configuration
DEFINE type MentorProfile as string union: "linus" | "supportive" | "technical"
DEFINE type EmailStyle as string union: "casual" | "formal" | "technical"
DEFINE type Schedule as string union: "daily" | "threePerWeek" | "weekly"
DEFINE type SubjectArea as string union: "programming" | "devops" | "networking" /* etc */
// Define types for challenges, submissions, feedback, etc.
DEFINE interface Challenge:
id: string
title: string
description: string
requirements: array of strings
examples: array of strings
hints?: array of strings
difficulty: number
topics: array of strings
createdAt: string
DEFINE interface Submission:
challengeId: string
content: string
submittedAt: string
filePath: string
DEFINE interface Feedback:
submissionId: string
strengths: array of strings
weaknesses: array of strings
suggestions: array of strings
score: number (0-100)
improvementPath: string
createdAt: string
DEFINE interface StudentProfile:
strengths: array of strings
weaknesses: array of strings
currentSkillLevel: number
recommendedTopics: array of strings
completedChallenges: number
averageScore: number
topicProgress: map of topic to progress number
notes: string
lastUpdated: string
DEFINE interface ArchiveConfig:
enabled: boolean
challengeRetentionDays: number
submissionRetentionDays: number
letterRetentionDays: number
detailedStatsRetentionDays: number
compactSummariesAutomatically: boolean
maxActiveFilesPerType: number
DEFINE interface Stats:
meta: {
lastCompaction: string
version: number
retentionPolicy: {
daily: number
weekly: number
monthly: number
}
}
challenges: {
daily: array of daily challenge stats
weekly: array of weekly challenge stats
monthly: array of monthly challenge stats
}
submissions: similar structure to challenges
topics: map of topic to progress stats
scores: array of score progression stats
activity: activity pattern stats
DEFINE interface Summary:
meta: {
lastUpdated: string
activeCount: number
archivedCount: number
}
activeChallenges: array of active challenge summaries
archivedChallenges: array of archived challenge summaries (minimal info)
// Add more types as needed
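As a hedged illustration of how these types interact, here is one way the student profile could be updated after new feedback arrives. The trimmed interfaces and the running-average logic are assumptions, not the system's confirmed behaviour:

```typescript
// Trimmed versions of the Feedback and StudentProfile interfaces above.
interface FeedbackLite { submissionId: string; score: number; } // score: 0-100
interface ProfileLite { completedChallenges: number; averageScore: number; lastUpdated: string; }

// Fold a new score into the running average and bump the challenge count.
function applyFeedback(profile: ProfileLite, fb: FeedbackLite): ProfileLite {
  const n = profile.completedChallenges;
  const newAvg = (profile.averageScore * n + fb.score) / (n + 1);
  return {
    ...profile,
    completedChallenges: n + 1,
    averageScore: Math.round(newAvg * 10) / 10, // keep one decimal place
    lastUpdated: new Date().toISOString(),
  };
}
```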
3. Student Profile (student-profile.json)
{
"strengths": [
"Strong understanding of TypeScript types",
"Good code organization"
],
"weaknesses": [
"Needs improvement on error handling",
"Could optimize performance better"
],
"currentSkillLevel": 4.5,
"recommendedTopics": [
"Advanced error handling in TypeScript",
"React performance optimization"
],
"completedChallenges": 12,
"averageScore": 78,
"topicProgress": {
"typescript": 0.7,
"react": 0.4
},
"notes": "Student shows strong progress in type systems but needs more practice with practical applications. Consider focusing next challenges on real-world scenarios.",
"lastUpdated": "2023-05-15T14:30:00Z"
}
4. Mentor Profiles (mentors/*.ts)
Example for Linus mentor profile (mentors/linus.ts):
EXPORT linusProfile object:
name: "Linus Torvalds"
personality: """
Direct, technically rigorous, and uncompromising. Values efficiency, elegance, and clarity in code.
Has low tolerance for sloppy work or unclear thinking. Will point out flaws directly and without
sugar-coating, but feedback is always technically sound and aimed at improvement.
"""
feedbackStyle: """
Brutally honest but substantive technical critique. Does not offer unnecessary praise.
Focuses on:
- Code quality and structure
- Performance considerations
- Naming conventions and clarity
- Design decisions and architecture
- Security and edge cases
May use colorful language when particularly frustrated with poor design choices.
"""
challengeStyle: """
Presents technically interesting problems that require careful thought.
Challenges often focus on system design, performance optimization, or elegant solutions to
complex problems. Expects clear, efficient, and well-documented solutions.
"""
responseStyle: """
Direct and to the point. Doesn't waste time with pleasantries. Answers questions with
technical precision and depth. May point out flaws in the question if it shows confused thinking.
Provides thorough technical explanations when the question deserves it.
"""
exampleFeedback: """
[Example feedback in Linus style]
"""
exampleResponse: """
[Example response to a question in Linus style]
"""
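A mentor profile like the one above maps onto a small typed object. A sketch, with the `MentorProfileDef` name and the shortened field text as assumptions:

```typescript
// Sketch of how a mentor profile could be typed; fields mirror the pseudocode.
interface MentorProfileDef {
  name: string;
  personality: string;
  feedbackStyle: string;
  challengeStyle: string;
  responseStyle: string;
}

const linusProfile: MentorProfileDef = {
  name: "Linus Torvalds",
  personality: "Direct, technically rigorous, and uncompromising.",
  feedbackStyle: "Brutally honest but substantive technical critique.",
  challengeStyle: "Technically interesting problems that require careful thought.",
  responseStyle: "Direct and to the point; technical precision and depth.",
};
```

In the real repo each profile would be exported from its own file under mentors/ and selected via `config.mentorProfile`.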
5. Send Challenge Workflow (.github/workflows/send-challenge.yml)
name: Send Challenge
on:
  # Schedule based on user config
  schedule:
    - cron: "0 9 * * *"      # Daily at 9AM
    - cron: "0 9 * * 1,3,5"  # Mon, Wed, Fri at 9AM
    - cron: "0 9 * * 1"      # Weekly on Monday at 9AM
  # Manual trigger
  workflow_dispatch:

jobs:
  generate-challenge:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm install

      - name: Check schedule configuration
        run: |
          # PSEUDOCODE:
          # READ config.ts to get schedule preference
          # DETERMINE if challenge should be sent today based on schedule
          # IF not scheduled for today:
          #   ECHO "Not scheduled for today"
          #   EXIT workflow

      - name: Generate challenge
        run: |
          # PSEUDOCODE:
          # READ student-profile.json for context
          # READ config.ts for user preferences
          # READ challenge summary for history context
          # PREPARE prompt for Gemini API
          # CALL Gemini API to generate challenge
          # PARSE response and format as Markdown
          # SAVE to challenges directory with timestamp ID
          # UPDATE challenge summary.json
          # SEND email using Resend API
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}

      - name: Commit changes
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "TechDeck Academy Bot"
          git add challenges/ challenge-summary.json
          git commit -m "Generate new challenge"
          git push
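The "Check schedule configuration" step above could be sketched in TypeScript roughly as follows. The `shouldSendToday` helper and its Monday/Wednesday/Friday mapping are assumptions matching the cron comments:

```typescript
// Gate for the "Check schedule configuration" step; mirrors the cron comments above.
type Schedule = "daily" | "threePerWeek" | "weekly";

function shouldSendToday(schedule: Schedule, date: Date): boolean {
  const day = date.getUTCDay(); // 0 = Sunday ... 6 = Saturday
  if (schedule === "daily") return true;
  if (schedule === "threePerWeek") return day === 1 || day === 3 || day === 5; // Mon/Wed/Fri
  return day === 1; // "weekly": Monday
}
```

All three crons fire the workflow; a gate like this lets the job exit early on days the user's configured schedule does not cover.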
6. Process Submissions Workflow (.github/workflows/process-submissions.yml)
name: Process Submissions
on:
  push:
    paths:
      - "submissions/**"
  workflow_dispatch:

jobs:
  process-submission:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2  # To identify new/changed files

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm install

      - name: Identify new submissions
        run: |
          # PSEUDOCODE:
          # FIND newly added or modified files in submissions directory
          # STORE list of files to process

      - name: Process submissions
        run: |
          # PSEUDOCODE:
          # FOR each submission file:
          #   EXTRACT challenge ID from filename
          #   FIND corresponding challenge in challenges directory
          #   READ student-profile.json for context
          #   READ mentor profile based on user configuration
          #   PREPARE prompt for Gemini API
          #   CALL Gemini API to generate feedback
          #   PARSE response and format as Markdown
          #   SAVE to feedback directory
          #   EXTRACT score and update student-profile.json
          #   SEND email using Resend API
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}

      - name: Commit changes
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "TechDeck Academy Bot"
          git add feedback/ student-profile.json
          git commit -m "Add feedback for submissions"
          git push
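The "EXTRACT challenge ID from filename" step could look like this. The `submissions/<challengeId>/…` layout is an assumption; the real repo may organise submissions differently:

```typescript
// Hypothetical layout: submissions/<challengeId>/<files>; adjust to the real repo layout.
function challengeIdFromPath(filePath: string): string | null {
  const match = filePath.match(/^submissions\/([^/]+)\//);
  return match ? match[1] : null; // null: file is not a submission
}
```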
7. Respond to Letters Workflow (.github/workflows/respond-to-letters.yml)
name: Respond to Letters
on:
  push:
    paths:
      - "letters/to-mentor/**"
  workflow_dispatch:

jobs:
  respond-to-letter:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 2  # To identify new files

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm install

      - name: Identify new letters
        run: |
          # PSEUDOCODE:
          # FIND newly added files in letters/to-mentor directory
          # SORT by creation time (oldest first)
          # LIMIT to processable batch if too many

      - name: Process letters
        run: |
          # PSEUDOCODE:
          # FOR each letter file (oldest first):
          #   READ letter content
          #   READ recent correspondence for context
          #   READ student-profile.json for context
          #   READ mentor profile based on user configuration
          #   PREPARE prompt for Gemini API
          #   CALL Gemini API to generate response
          #   PARSE response and format as Markdown
          #   SAVE to letters/from-mentor directory
          #   UPDATE student-profile.json with new insights
          #   SEND email using Resend API
          #   MOVE processed letter to archive directory
        env:
          GEMINI_API_KEY: ${{ secrets.GEMINI_API_KEY }}
          RESEND_API_KEY: ${{ secrets.RESEND_API_KEY }}

      - name: Commit changes
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "TechDeck Academy Bot"
          git add letters/ student-profile.json
          git commit -m "Respond to user letters"
          git push
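The "sort oldest first, limit the batch" logic for incoming letters is simple to sketch. The `pickLetterBatch` name and ISO-timestamp sorting are assumptions:

```typescript
// Sketch of the "oldest first, limited batch" selection for incoming letters.
interface LetterFile { path: string; createdAt: string; } // ISO timestamps sort lexicographically

function pickLetterBatch(files: LetterFile[], maxBatch: number): LetterFile[] {
  return [...files]
    .sort((a, b) => a.createdAt.localeCompare(b.createdAt)) // oldest first
    .slice(0, maxBatch); // cap the batch so one run never processes too many
}
```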
8. Rotate Files Workflow
name: Rotate Files
on:
  schedule:
    - cron: "0 1 1 * *"  # 1st day of month at 1:00 AM
  workflow_dispatch:
    inputs:
      forceRotation:
        description: "Force rotation even if not scheduled"
        required: false
        default: false
        type: boolean

jobs:
  rotate-files:
    runs-on: ubuntu-latest
    steps:
      - name: Checkout repository
        uses: actions/checkout@v4
        with:
          fetch-depth: 0  # Full history for accurate dating

      - name: Setup Node.js
        uses: actions/setup-node@v4
        with:
          node-version: "20"

      - name: Install dependencies
        run: npm install

      - name: Check if rotation needed
        id: check-rotation
        run: |
          # PSEUDOCODE:
          # IF manual trigger with force option:
          #   SET should_rotate=true
          # ELSE:
          #   CHECK if it's the right time for rotation
          #   CHECK if files have grown too large
          #   SET should_rotate based on checks

      - name: Perform file rotation
        if: steps.check-rotation.outputs.should_rotate == 'true'
        run: |
          # PSEUDOCODE:
          # CREATE archive directories if they don't exist
          # MOVE old challenges to archive (older than 30 days)
          # MOVE old submissions to archive
          # MOVE old feedback to archive
          # MOVE old letters to archive
          # COMPACT stats.json through aggregation
          # COMPACT summary.json by removing details of archived items
          # GENERATE rotation report

      - name: Update summary files
        if: steps.check-rotation.outputs.should_rotate == 'true'
        run: |
          # PSEUDOCODE:
          # REGENERATE challenge summary with only active challenges
          # UPDATE student profile to maintain current context
          # ENSURE all necessary references are updated

      - name: Commit changes
        if: steps.check-rotation.outputs.should_rotate == 'true'
        run: |
          git config --local user.email "[email protected]"
          git config --local user.name "TechDeck Academy Bot"
          git add archive/ challenges/ submissions/ feedback/ letters/ *.json
          git commit -m "Monthly file rotation and archiving"
          git push
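The "older than N days" checks used during rotation reduce to a small age test, sketched here (the helper name is an assumption):

```typescript
// Age check behind "MOVE old X to archive (older than N days)".
function isOlderThan(createdAt: string, thresholdDays: number, now: Date): boolean {
  const ageMs = now.getTime() - new Date(createdAt).getTime();
  return ageMs > thresholdDays * 24 * 60 * 60 * 1000;
}
```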
Utility Modules (Pseudocode)
10. AI Utilities (src/utils/ai.ts)
// Import required libraries and types
/*
FUNCTION: generateChallengePrompt
INPUT:
- config: User configuration
- studentProfile: Student profile data
- recentChallenges: Array of recent challenges
STEPS:
- EXTRACT relevant info from config
- BUILD context including student strengths/weaknesses
- INCLUDE summary of recent challenges
- CONSTRUCT prompt for generating appropriately difficult challenge
RETURN: Formatted prompt string
*/
/*
FUNCTION: generateFeedbackPrompt
INPUT:
- challenge: Original challenge
- submission: User submission
- studentProfile: Student profile data
- mentorProfile: Selected mentor profile
STEPS:
- EXTRACT challenge requirements
- INCLUDE submission content
- ADD student background from profile
- ADD mentor persona and feedback style
- SET instructions for scoring and feedback structure
RETURN: Formatted prompt string
*/
/*
FUNCTION: generateLetterResponsePrompt
INPUT:
- question: User question
- correspondence: Recent letter exchanges
- studentProfile: Student profile data
- mentorProfile: Selected mentor profile
STEPS:
- INCLUDE the question
- ADD conversation history for context
- ADD student background from profile
- ADD mentor persona and response style
- SET instructions for response format
RETURN: Formatted prompt string
*/
/*
FUNCTION: generateDigestPrompt
INPUT:
- digestType: weekly/monthly/quarterly
- periodStats: Statistics for the period
- studentProfile: Student profile data
STEPS:
- DETERMINE appropriate period and format
- SUMMARIZE period's achievements and challenges
- INCLUDE relevant statistics and trends
- SET instructions for digest structure
RETURN: Formatted prompt string
*/
/*
FUNCTION: callGeminiAPI
INPUT:
- prompt: Formatted prompt string
- model: Gemini model to use (optional)
- temperature: Creativity setting (optional)
STEPS:
- SETUP API connection with GEMINI_API_KEY
- SEND prompt to API
- HANDLE errors and retries
- PROCESS response
RETURN: Parsed response from Gemini
*/
/*
FUNCTION: parseChallengeResponse
INPUT:
- response: Raw API response
STEPS:
- EXTRACT challenge content
- VALIDATE required sections exist
- FORMAT properly as markdown
- STRUCTURE as Challenge object
RETURN: Challenge object
*/
/*
FUNCTION: parseFeedbackResponse
INPUT:
- response: Raw API response
STEPS:
- EXTRACT feedback content
- EXTRACT score value
- IDENTIFY strengths and weaknesses
- STRUCTURE as Feedback object
RETURN: Feedback object
*/
// Export all functions
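The score extraction inside parseFeedbackResponse could be sketched like this, assuming the feedback prompt asks the model to emit a line such as `Score: 78/100` (the exact output format is an assumption):

```typescript
// Hypothetical score extraction for parseFeedbackResponse; assumes the prompt
// instructs the model to emit a line like "Score: 78/100".
function extractScore(text: string): number | null {
  const match = text.match(/Score:\s*(\d{1,3})\s*\/\s*100/i);
  if (!match) return null; // caller can retry or flag the response
  const score = parseInt(match[1], 10);
  return score >= 0 && score <= 100 ? score : null;
}
```

Validating the parsed value here is what lets the workflow safely update averageScore in student-profile.json.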
11. Email Utilities (src/utils/email.ts)
// Import required libraries and types
/*
FUNCTION: formatChallengeEmail
INPUT:
- challenge: Challenge to send
- emailStyle: User's preferred style
STEPS:
- CREATE subject line based on challenge title
- SELECT intro based on emailStyle
- FORMAT challenge content for email
- ADD instructions for submitting solution
RETURN: Email object with subject and content
*/
/*
FUNCTION: formatFeedbackEmail
INPUT:
- feedback: Feedback to send
- submission: Original submission
- challenge: Original challenge
- emailStyle: User's preferred style
STEPS:
- CREATE subject line referencing challenge
- SELECT intro based on emailStyle
- FORMAT feedback content for email
- ADD score and next steps
RETURN: Email object with subject and content
*/
/*
FUNCTION: formatLetterResponseEmail
INPUT:
- response: Mentor response
- question: Original question
- emailStyle: User's preferred style
STEPS:
- CREATE subject line based on question topic
- SELECT intro based on emailStyle
- FORMAT response content for email
- ADD instructions for follow-up questions
RETURN: Email object with subject and content
*/
/*
FUNCTION: formatDigestEmail
INPUT:
- digest: Digest content
- digestType: weekly/monthly/quarterly
- emailStyle: User's preferred style
STEPS:
- CREATE subject line based on digest type and period
- SELECT intro based on emailStyle
- FORMAT digest content for email
- ADD summary and next steps
RETURN: Email object with subject and content
*/
/*
FUNCTION: sendEmail
INPUT:
- to: Recipient email
- subject: Email subject
- content: Email content
STEPS:
- SETUP Resend API with RESEND_API_KEY
- FORMAT email with proper styling
- SEND email via API
- HANDLE errors and retries
RETURN: Success status and message ID
*/
// Export all functions
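As one possible shape for formatChallengeEmail, here is a sketch of the subject and intro selection. The subject prefix and intro strings are invented examples, not the system's actual copy:

```typescript
// Sketch of formatChallengeEmail's subject/intro selection; strings are placeholders.
type EmailStyle = "casual" | "formal" | "technical";

function formatChallengeEmail(title: string, style: EmailStyle): { subject: string; intro: string } {
  const intros: Record<EmailStyle, string> = {
    casual: "Hey! Here's your next challenge:",
    formal: "Please find your next challenge below.",
    technical: "Challenge spec follows.",
  };
  return {
    subject: `[TechDeck Academy] New Challenge: ${title}`,
    intro: intros[style],
  };
}
```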
12. Archive Utilities
// Import required libraries and types
/*
FUNCTION: shouldRotateFiles
INPUT:
- None (uses config for thresholds)
STEPS:
- CHECK if rotation is enabled in config
- CHECK last rotation date from metadata
- CHECK file sizes of stats.json and summary.json
- CHECK number of active files
RETURN: Boolean indicating if rotation should happen
*/
/*
FUNCTION: archiveChallenges
INPUT:
- thresholdDays: Days to keep in active directory
STEPS:
- IDENTIFY challenges older than threshold
- CREATE archive directory for current month
- MOVE old challenges to archive
- UPDATE summary.json to reflect changes
RETURN: Number of files archived
*/
/*
FUNCTION: archiveSubmissions
INPUT:
- thresholdDays: Days to keep in active directory
STEPS:
- IDENTIFY submissions older than threshold
- CREATE archive directory for current month
- MOVE old submissions to archive
- UPDATE relevant tracking files
RETURN: Number of files archived
*/
/*
FUNCTION: archiveFeedback
INPUT:
- thresholdDays: Days to keep in active directory
STEPS:
- IDENTIFY feedback older than threshold
- CREATE archive directory for current month
- MOVE old feedback to archive
- MAINTAIN relationship with submissions
RETURN: Number of files archived
*/
/*
FUNCTION: archiveLetters
INPUT:
- thresholdDays: Days to keep in active directory
STEPS:
- IDENTIFY letters older than threshold
- CREATE archive directory for current month
- MOVE old letters to archive
- KEEP correspondence pairs together
RETURN: Number of files archived
*/
/*
FUNCTION: performMonthlyRotation
INPUT:
- force: Force rotation regardless of thresholds
STEPS:
- CHECK if rotation is needed
- RUN all archive functions
- UPDATE summary files
- GENERATE rotation report
- UPDATE timestamp of last rotation
RETURN: Rotation report with statistics
*/
// Export all functions
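Taken together, the rotation decision in shouldRotateFiles might combine its checks like this (the 30-day cadence and the `RotationState` shape are assumptions):

```typescript
// Sketch of shouldRotateFiles combining the checks listed above.
// The 30-day cadence is an assumption, not fixed by the design.
interface RotationState {
  enabled: boolean;
  lastRotation: string;        // ISO timestamp of previous rotation
  activeFiles: number;         // current count of active files
  maxActiveFilesPerType: number;
}

function shouldRotateFiles(state: RotationState, now: Date): boolean {
  if (!state.enabled) return false;
  const daysSince =
    (now.getTime() - new Date(state.lastRotation).getTime()) / 86_400_000;
  // Rotate on the monthly cadence, or early if the active set grew too large.
  return daysSince >= 30 || state.activeFiles > state.maxActiveFilesPerType;
}
```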