André J (eonist)
eonist / iPhone FM radio.md
Created August 28, 2025 11:44
iPhone FM radio

No — iPhones do not include a usable, disabled FM radio that can simply be switched on. Older models had combo chips with FM capability, but Apple never wired the antenna/tuning/audio paths, and newer models (iPhone 7 and later) lack FM support entirely according to Apple.[3][5][6][7]

Why the confusion

Some early iPhones used Broadcom/Qualcomm combo chips that included an FM receiver core as part of the package, which led to the misconception that FM was merely “turned off.” However, the FM-related pins weren’t connected inside the phone—there was no antenna input, no tuning lines, and no audio routing—so the device never functioned as an FM receiver.[6][8][3]

After iPhone 7

Apple has stated that iPhone 7, iPhone 8, iPhone X and later do not have FM radio capability in hardware and therefore cannot enable it via software. This position was reiterated during public calls to “turn on” FM for emergency use, emphasizing that the necessary hardware simply isn’t present in these models.[5][7][6]

eonist / 1st_principles_med_law_AI.md
Created August 26, 2025 22:58
Please use first principles to argue why people starting to study law or medicine today won't matter, because AI will make these jobs obsolete by the time they graduate.

A First Principles Analysis: Why Starting Law or Medical School Today May Be Strategically Questionable

Let me break this down using fundamental reasoning about AI capabilities, educational timelines, and economic forces.

The Timeline Problem

Law Students Starting Today:

  • Graduate in 2028 (3-year program)
eonist / mcp file upload download problem.md
Created August 24, 2025 11:27
mcp file upload download problem

Quick assessment

  • Partly correct: installation friction made remote MCP servers attractive, but Claude Desktop now supports one‑click local MCP installations via Desktop Extensions, which reduces the need to rely on remote URLs only.[1]
  • Not correct as stated that clients “can’t” send/receive files: Claude Desktop can read, write, move, and search local files via the official filesystem MCP server, so file transfer is possible when a local MCP is connected.[2][3]
  • Protocol nuance: MCP “resources” are a read API that can return text or binary blobs; writes and uploads are intentionally handled via server‑defined tools rather than a built‑in file‑upload primitive, so cross‑client uploads aren’t standardized at the protocol level today.[4][5]
  • Viable today: many community and vendor MCP servers already implement upload/download patterns or bridges (e.g., generic “upload-file” servers and file‑ops integrations), so CRUD‑style workflows are feasible without waiting for protocol changes.[6][7][8]
  • Your curr
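The tool-based write path described in the bullets above can be sketched without any SDK. Everything here is hypothetical: the `upload_file` tool name, its schema fields, and the `handle_upload` helper are illustrative server conventions, not part of the MCP spec, which deliberately leaves uploads to server-defined tools.

```python
import base64
import tempfile
from pathlib import Path

# Hypothetical server-defined tool schema: uploads are not a built-in MCP
# primitive, so a server would advertise something like this via tools/list.
UPLOAD_TOOL = {
    "name": "upload_file",  # illustrative name; nothing in the spec mandates it
    "description": "Write a base64-encoded payload to a path under the server root.",
    "inputSchema": {
        "type": "object",
        "properties": {
            "path": {"type": "string"},
            "content_b64": {"type": "string"},
        },
        "required": ["path", "content_b64"],
    },
}

def handle_upload(root: Path, path: str, content_b64: str) -> str:
    """Server-side handler: decode and write, confined to the server root."""
    target = (root / path).resolve()
    if root.resolve() not in target.parents and target != root.resolve():
        raise ValueError("path escapes server root")
    target.parent.mkdir(parents=True, exist_ok=True)
    target.write_bytes(base64.b64decode(content_b64))
    return f"wrote {target.stat().st_size} bytes to {path}"

# Dispatching a tools/call the way a client request would arrive:
root = Path(tempfile.mkdtemp())
print(handle_upload(root, "notes/hello.txt",
                    base64.b64encode(b"hi").decode()))
```

The point of the sketch is the split the bullet describes: reads can go through resources, but writes travel through an ordinary, server-defined tool call.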
eonist / parroting.md
Created August 22, 2025 12:09
parroting.md

Do AI agents “suffer from parroting”?

What “parroting” means in AI

  • The “stochastic parrot” critique argues large language models can mimic fluent language without true understanding, remixing patterns from training data rather than grounding statements in meaning.[1][2]
  • This manifests as outputs that sound confident and coherent but may be shallow, biased, or wrong because they are driven by statistical association rather than verified knowledge.[3][2][1]

Two related failure modes

  • Hallucination/confabulation: models generate plausible but incorrect or nonsensical content; this is inherent to next-token prediction and imperfect generative modeling, and is influenced by decoding choices like temperature/top‑k sampling.[4][3]
  • Data parroting/memorization: models reproduce training content or distinctive elements (e.g., logos) too closely, reflecting overfitting and raising trust, IP, or privacy concerns.[5]
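The decoding choices mentioned above (temperature, top-k) can be made concrete with a toy sampler; the logits below are invented for illustration, not from any real model.

```python
import math
import random

def sample_next(logits: dict[str, float], temperature: float, k: int,
                rng: random.Random) -> str:
    """Keep the k highest-scoring tokens, reshape with temperature, sample."""
    top = sorted(logits.items(), key=lambda kv: kv[1], reverse=True)[:k]
    weights = [math.exp(v / temperature) for _, v in top]
    total = sum(weights)
    return rng.choices([t for t, _ in top], [w / total for w in weights])[0]

# Toy next-token scores for "The capital of France is ...":
logits = {"Paris": 5.0, "Lyon": 2.0, "banana": 0.1}

# Low temperature + small k concentrates mass on the top token; raising
# either increases diversity and, with it, the chance of a poor sample.
rng = random.Random(0)
print(sample_next(logits, temperature=0.5, k=2, rng=rng))
```

Higher temperature flattens the distribution, so implausible continuations like "banana" regain probability mass; that trade-off is why decoding settings influence the hallucination rate described above.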
eonist / Prefix mcp tools.md
Created August 21, 2025 21:39
Prefix mcp tools.md

MCP tool naming: should they be prefixed with the service name?

Short answer: Prefixing tools with a clear, unique namespace (often the service or server name) is a good practice for clarity and collision avoidance, but it isn’t required by the MCP spec. If tools from multiple servers may coexist, using a prefix or consistent namespace helps both humans and models pick the right tool.
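A minimal sketch of that namespacing convention, assuming hypothetical `github` and `jira` servers that both expose deliberately generic tool names:

```python
import re

def qualify(server: str, tool: str) -> str:
    """Prefix a tool with its server namespace; enforce snake_case."""
    name = f"{server}_{tool}"
    if not re.fullmatch(r"[a-z][a-z0-9_]*", name):
        raise ValueError(f"not tokenization-friendly: {name!r}")
    return name

registry: dict[str, str] = {}
servers = {
    "github": ["search", "create_issue"],
    "jira": ["search", "create_issue"],  # same generic names on both servers
}
for server, tools in servers.items():
    for tool in tools:
        qualified = qualify(server, tool)
        # Without the prefix, the two "search" tools would collide here:
        assert qualified not in registry, f"collision: {qualified}"
        registry[qualified] = f"{server}/{tool}"

print(sorted(registry))
```

The snake_case check mirrors the best-practice guidance: no spaces, dots, or brackets, so names tokenize cleanly and the model can tell `github_search` from `jira_search` at a glance.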

What the ecosystem and docs suggest

  • The MCP spec and docs describe tools as server-exposed functions but do not mandate a prefix; effectiveness hinges more on clear names and detailed descriptions than on a specific naming style.[1][2]
  • Practitioners highlight ambiguity problems with generic tool names and propose namespacing (e.g., a consistent prefix) to ensure uniqueness across servers and clearer intent mapping by LLMs.[3][4]
  • Several best-practice guides recommend sticking to tokenization-friendly, consistent formats (prefer snake_case; avoid spaces, dots, brackets), which improves discoverability and tool-calli
eonist / mcp_info_tool.md
Created August 21, 2025 15:12
mcp_info_tool.md

Short answer

  • There isn’t a standardized “usage_best_practice” command in the MCP spec that tells an LLM how to use an entire MCP server as a whole. The protocol defines primitives (tools, resources, prompts, progress, cancellation, logging, security), but no reserved command dedicated to server-wide best-practice guidance.[1][2][3]
  • Some community guidance recommends providing an explicit “info” or similar tool that returns version, health, configuration, and usage guidance for a whole MCP server, but this is a convention, not a spec-mandated standard.[4]
  • Enterprise and vendor guides discuss broader MCP “best practices” (architecture, security, latency, semantic layers), but these are deployment patterns, not a protocol-level “usage_best_practice” command.[5][6]
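As a sketch of that community convention: a single tool that reports server-wide version, health, and usage guidance. The `info_tool` handler and its field names are illustrative, not spec-defined.

```python
import json
import platform

def info_tool() -> str:
    """Hypothetical 'info' tool handler; the fields are a convention, not a standard."""
    payload = {
        "name": "example_server",        # illustrative server name
        "version": "1.2.0",
        "healthy": True,
        "python": platform.python_version(),
        # Free-text guidance an LLM can read before using the other tools:
        "usage": "Call example_server_search before example_server_fetch; "
                 "all paths are relative to the configured root.",
    }
    return json.dumps(payload, indent=2)

print(info_tool())
```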

What the MCP spec covers (and doesn’t)

  • The current MCP specification (e.g., 2025-06-18) defines capabilities like configuration, progress tracking, cancellation, error reporting, logging, and a strong section on security/user consent,
eonist / most popular slack bots.md
Created August 20, 2025 16:18
most popular slack bots
eonist / block kit.md
Created August 20, 2025 15:59
block kit.md

Yes, the Slack Bot API offers extensive UI elements beyond text through a framework called Block Kit, which provides rich formatting and interactive components for creating sophisticated user interfaces within Slack messages.[2]

Block Kit Framework

Block Kit is Slack's UI framework for building complete workflows right inside Slack, working seamlessly across desktop and mobile devices without additional code or configuration. You build messages from pre-defined blocks, used individually or in groups, including text sections, thumbnail images, dividers, and interactive elements such as buttons, dropdown menus, and date pickers.[2]
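A minimal example of such a payload assembled in Python (the channel name and `action_id` wiring are illustrative): a markdown section, a divider, and an interactive button, posted as the `blocks` field of a message.

```python
import json

blocks = [
    {"type": "section",
     "text": {"type": "mrkdwn", "text": "*Deploy finished* :rocket:"}},
    {"type": "divider"},
    {"type": "actions",
     "elements": [
         {"type": "button",
          "text": {"type": "plain_text", "text": "View logs"},
          "action_id": "view_logs"}  # your handler receives this id on click
     ]},
]

# This JSON is what chat.postMessage would carry:
print(json.dumps({"channel": "#deploys", "blocks": blocks}, indent=2))
```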

Available UI Elements

Visual Components:

  • Text sections with markdown formatting support
eonist / is llm compute cost going down? show timeline from 2022 until now.md
Created August 19, 2025 15:05
is llm compute cost going down? show timeline from 2022 until now

Is LLM compute cost going down? Timeline (2022–Aug 2025)

Short answer

  • Yes: the cost to run LLMs (inference) has fallen dramatically since 2022—by orders of magnitude for equivalent quality levels—while training costs for cutting-edge frontier models have generally increased.[1][2][3][4][5][6]

Timeline of key shifts (2022–Aug 2025)

2022

  • Early public benchmarks and pricing baselines: models achieving mid-tier MMLU performance (e.g., GPT‑3.5-level) were priced at roughly $20 per million tokens in late 2022, setting an initial reference point for subsequent price declines.[2]
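That $20-per-million baseline makes the scale of the decline easy to check with back-of-the-envelope arithmetic; the later price below is an illustrative placeholder for an orders-of-magnitude drop, not a quoted figure.

```python
def cost_usd(tokens: int, price_per_million: float) -> float:
    """Cost of processing a token count at a per-million-token price."""
    return tokens / 1_000_000 * price_per_million

workload = 50_000_000                    # e.g., 50M tokens in a month
baseline = cost_usd(workload, 20.0)      # late-2022 reference point from above
later_price = 0.20                       # hypothetical: two orders of magnitude lower
print(f"${baseline:,.2f} -> ${cost_usd(workload, later_price):,.2f}")
```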
# Using MCP to run two coder agents in a turn-by-turn collaboration
Short answer: Yes—this is possible with systems built on the Model Context Protocol (MCP). Several community and vendor implementations demonstrate multi-agent setups where two (or more) specialized coder agents collaborate either turn-by-turn (sequential) or in parallel, with a coordinator/hub mediating messages and shared context for handoffs and arbitration. MCP’s recent evolution supports richer interaction patterns that make agent-to-agent coordination feasible, including multi-turn task flows and indirect or direct agent messaging via a shared hub/context store.[1][6][9]
## What “turn-by-turn” looks like with MCP
- A shared MCP hub (or server) acts as the message bus and single source of truth; agents read/write tasks, status, and artifacts there.[1]
- Agents coordinate indirectly through the shared context (default), or occasionally directly, while still logging outcomes back to the hub for traceability.[1]
- A planner/coordinator ca
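The hub-mediated, turn-by-turn pattern above can be sketched in a few lines. The `Hub` class and the agent stubs are hypothetical stand-ins for an MCP server and real coder agents, not an actual MCP implementation.

```python
from dataclasses import dataclass, field

@dataclass
class Hub:
    """Stand-in for an MCP hub: message bus and single source of truth."""
    log: list[tuple[str, str]] = field(default_factory=list)

    def post(self, agent: str, message: str) -> None:
        self.log.append((agent, message))       # every outcome is logged for traceability

    def last(self) -> str:
        return self.log[-1][1] if self.log else ""

def coder_agent(name: str, hub: Hub, turn: int) -> None:
    """Stubbed agent: reads shared context (indirect coordination), then writes."""
    context = hub.last()
    hub.post(name, f"turn {turn}: built on {context!r}" if context
                   else f"turn {turn}: drafted initial plan")

hub = Hub()
for turn in range(4):                           # strict alternation, turn by turn
    coder_agent("planner" if turn % 2 == 0 else "reviewer", hub, turn)

for agent, msg in hub.log:
    print(agent, "->", msg)
```

Each agent only ever reads the hub, never the other agent directly, which is the default indirect-coordination mode described above; a coordinator could sit in the same loop to arbitrate or re-plan between turns.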