
@sabarna9210
Forked from piyushgarg-dev/README.md
Created June 18, 2025 04:52
Kafka Crash Course


Video Link: Apache Kafka Crash Course | What is Kafka?

Prerequisite

Commands

  • Start the Zookeeper container and expose port 2181.
docker run -p 2181:2181 zookeeper
  • Start the Kafka container, expose port 9092, and set up the required environment variables.
docker run -p 9092:9092 \
-e KAFKA_ZOOKEEPER_CONNECT=<PRIVATE_IP>:2181 \
-e KAFKA_ADVERTISED_LISTENERS=PLAINTEXT://<PRIVATE_IP>:9092 \
-e KAFKA_OFFSETS_TOPIC_REPLICATION_FACTOR=1 \
confluentinc/cp-kafka

Code

client.js

const { Kafka } = require("kafkajs");

exports.kafka = new Kafka({
  clientId: "my-app",
  brokers: ["<PRIVATE_IP>:9092"],
});

admin.js

const { kafka } = require("./client");

async function init() {
  const admin = kafka.admin();
  console.log("Admin connecting...");
  await admin.connect();
  console.log("Admin Connected Successfully");

  console.log("Creating Topic [rider-updates]");
  await admin.createTopics({
    topics: [
      {
        topic: "rider-updates",
        numPartitions: 2,
      },
    ],
  });
  console.log("Topic Created Successfully [rider-updates]");

  console.log("Disconnecting Admin..");
  await admin.disconnect();
}

init();

producer.js

const { kafka } = require("./client");
const readline = require("readline");

const rl = readline.createInterface({
  input: process.stdin,
  output: process.stdout,
});

async function init() {
  const producer = kafka.producer();

  console.log("Connecting Producer");
  await producer.connect();
  console.log("Producer Connected Successfully");

  rl.setPrompt("> ");
  rl.prompt();

  rl.on("line", async function (line) {
    const [riderName, location] = line.split(" ");
    // Guard against malformed input: a single word would leave
    // `location` undefined and crash on toLowerCase().
    if (!riderName || !location) {
      console.log("Usage: <riderName> <location>");
      rl.prompt();
      return;
    }
    await producer.send({
      topic: "rider-updates",
      messages: [
        {
          // Route "north" updates to partition 0, everything else to 1.
          partition: location.toLowerCase() === "north" ? 0 : 1,
          key: "location-update",
          value: JSON.stringify({ name: riderName, location }),
        },
      ],
    });
    rl.prompt();
  }).on("close", async () => {
    await producer.disconnect();
  });
}

init();

consumer.js

const { kafka } = require("./client");
const group = process.argv[2];

async function init() {
  const consumer = kafka.consumer({ groupId: group });
  await consumer.connect();

  await consumer.subscribe({ topics: ["rider-updates"], fromBeginning: true });

  await consumer.run({
    eachMessage: async ({ topic, partition, message }) => {
      console.log(
        `${group}: [${topic}]: PART:${partition}:`,
        message.value.toString()
      );
    },
  });
}

init();

Running Locally

  • Run Multiple Consumers
node consumer.js <GROUP_NAME>
  • Create Producer
node producer.js
> tony south
> tony north
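
The demo input above is split on the first space and routed by location. A tiny helper mirroring the routing rule in producer.js (the function name is illustrative, not part of the gist's code):

```javascript
// "north" riders go to partition 0, everyone else to partition 1 --
// the same ternary used inline in producer.js.
function partitionFor(location) {
  return location.toLowerCase() === "north" ? 0 : 1;
}

console.log(partitionFor("north")); // 0
console.log(partitionFor("south")); // 1
```

So in the demo, "tony south" lands on partition 1 and "tony north" on partition 0, which is why two consumers in one group can split the stream by region.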
Pub/Sub architecture


Auto Scaling: with 4 partitions, if a 5th consumer joins the group, it will sit idle.

One consumer can consume multiple partitions.
One partition cannot be consumed by more than one consumer within the same group.
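
These two rules can be sketched with a tiny in-memory assignment function (plain JavaScript, not kafkajs; the round-robin strategy here is a simplification of Kafka's real partition assignors):

```javascript
// Sketch of consumer-group partition assignment: each partition is owned
// by exactly one consumer, but one consumer may own several partitions.
function assignPartitions(partitions, consumers) {
  const assignment = Object.fromEntries(consumers.map((c) => [c, []]));
  partitions.forEach((p, i) => {
    // Round-robin: partition i goes to exactly one consumer in the group.
    assignment[consumers[i % consumers.length]].push(p);
  });
  return assignment;
}

// 4 partitions, 2 consumers: each consumer owns 2 partitions.
console.log(assignPartitions([0, 1, 2, 3], ["c1", "c2"]));
// 4 partitions, 5 consumers: c5 gets nothing and sits idle.
console.log(assignPartitions([0, 1, 2, 3], ["c1", "c2", "c3", "c4", "c5"]));
```

This also explains the auto-scaling limit noted above: once every partition is owned, extra consumers in the group have no work.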

Self-balancing happens at the consumer-group level.

Apache Kafka is a hybrid system: it supports both pub/sub and queue-like semantics depending on how you use it.

1️⃣ Kafka as a Pub/Sub (Publish-Subscribe) System

In a classic pub/sub model:

  • One or more producers (publishers) send messages to a topic.
  • Multiple consumers (subscribers) can subscribe to that topic.
  • Every subscriber gets a copy of every message published to the topic.

In Kafka:

  • A topic can have multiple consumer groups, and each consumer group acts like an independent subscriber.
  • All consumers in a group coordinate so that each message is processed once per group (one consumer in the group handles a given message).
  • If two different groups subscribe to the same topic, both groups get all messages → the pub/sub pattern.

2️⃣ Kafka as a Message Queue

In a traditional queue:

  • Messages are stored in order.
  • Each message is processed by exactly one consumer.
  • Multiple consumers can work together (competing consumers), but each message goes to only one of them.

In Kafka:

  • Within a consumer group, multiple consumers share the work.
  • Kafka partitions the topic, and each partition is consumed by only one consumer in the group.
  • Effectively, each message is processed by only one consumer in the group → queue-like behavior.
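
The two modes can be illustrated with a small in-memory simulation (plain JavaScript, not kafkajs; offsets, rebalancing, and key-based partitioning are drastically simplified here):

```javascript
// Sketch of Kafka's hybrid delivery: every group reads the whole log
// (pub/sub across groups), while members of one group split the
// messages between them (queue semantics within a group).
function deliver(log, groups) {
  const seen = {};
  for (const [groupId, members] of Object.entries(groups)) {
    seen[groupId] = members.map(() => []);
    log.forEach((msg, offset) => {
      // Within a group, exactly one member handles each message.
      seen[groupId][offset % members.length].push(msg);
    });
  }
  return seen;
}

const log = ["m1", "m2", "m3", "m4"];
// "analytics" and "billing" are hypothetical group names.
const result = deliver(log, { analytics: ["a1"], billing: ["b1", "b2"] });
// Pub/sub across groups: both groups see all four messages.
// Queue within "billing": b1 and b2 split the work.
console.log(result);
```

In real Kafka the split inside a group happens per partition rather than per message, but the end result is the same: each group sees everything, and each message is handled once per group.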
