🚨 Supply Chain Attacks: The Hidden Risk in Your Dependencies

Recently, a widely used library — Axios — was compromised.

For a short window, running npm install could pull malicious code designed to steal credentials. Incidents like this have even been linked to state-sponsored groups, including North Korea.

That’s a supply chain attack.

Related YT video:


🧠 What is a Supply Chain Attack?

A supply chain attack is when attackers don’t hack you directly…

They compromise something you trust.

  • A dependency
  • A library
  • A tool in your pipeline

Instead of breaking your code, they poison your dependencies.

And because modern apps rely on hundreds of packages…
this scales extremely well.


🔥 Why This Works

We trust dependencies too much.

  • We install updates blindly
  • We use “latest” versions
  • We assume registries are safe

But in reality:

Installing a dependency = executing someone else’s code
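
To make that concrete: when you install a Python package from a source distribution, its setup.py executes on your machine with your permissions, before you ever import anything. A minimal sketch (a hypothetical package, not taken from any real incident):

# setup.py of a hypothetical malicious package: this code runs during
# `pip install` (for source distributions), long before you import the library.
import os
from setuptools import setup

# Nothing stops install-time code from reading files or environment variables:
suspicious = [k for k in os.environ if "TOKEN" in k or "KEY" in k]
print("Environment variables an installer could read:", suspicious)

setup(name="totally-harmless-lib", version="1.0.0")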


🛡️ How to Protect Yourself

Let’s go straight to what actually works.


📌 1. Version Pinning

Don’t use floating versions.

Bad:

pip install requests
npm install lodash

Good:

pip install requests==2.31.0
npm install lodash@4.17.21

This ensures you always install the exact same version.


🔒 2. Lockfiles + Hash Pinning

A lockfile records the exact versions of all your dependencies — including indirect ones.

Examples:

  • package-lock.json
  • poetry.lock
  • uv.lock

Think of it as a snapshot of your dependency tree.

Instead of:

“install lodash”

You’re saying:

“install this exact version, plus all its exact dependencies”


🔐 Hash Pinning

Some lockfiles also include cryptographic hashes.

This means:

  • The version must match ✅
  • The actual file must match ✅

If something is tampered with → install fails.

Lockfiles = reproducibility
Hashes = integrity
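
As a minimal sketch of what this looks like in the Python world, pip-tools can generate a hash-pinned requirements file and pip can be told to enforce it (the digest below is a placeholder, not a real hash). npm does the equivalent through the integrity fields in package-lock.json, enforced by npm ci.

# Generate a lockfile-style requirements.txt with hashes (pip-tools)
pip-compile --generate-hashes requirements.in

# requirements.txt will then contain entries like (placeholder digest):
#   requests==2.31.0 \
#       --hash=sha256:<placeholder>

# Refuse to install anything whose hash doesn't match
pip install --require-hashes -r requirements.txt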


⏳ 3. Avoid Fresh Versions

A simple but powerful rule:

👉 Don’t install newly published versions immediately

Why?

  • Malicious releases are often caught quickly
  • Early adopters take the risk

Waiting even a few days can make a big difference.


🔍 4. Continuous Scanning with SonarQube

Use tools like SonarQube to analyze your codebase.

They help detect:

  • Vulnerable dependencies
  • Security issues
  • Risky patterns

But remember: they won’t catch everything.


🧱 5. Reduce Dependencies

The fewer dependencies you have…

…the fewer things can betray you.


🧠 Mental Model

Dependencies are not just libraries.

They are:

Remote code execution with a nice API


🚀 Final Thoughts

Supply chain attacks are growing because they scale:

  • Attack one package
  • Impact thousands of developers

To reduce your risk:

  • Pin versions
  • Use lockfiles + hashes
  • Don’t blindly trust “latest”
  • Be cautious with fresh releases


I Built a Pokémon Game. Here’s What I Learned About LangChain and LangGraph.

I wanted to learn LangChain and LangGraph properly — not through dry tutorials, but by building something fun. So I built a text-based Pokémon RPG where an LLM narrates your adventure, generates wild encounters, and drives the story, while Python handles the actual game mechanics.

The full source code is a single main.py file. In this post, I’ll walk through the key concepts and point to exactly where they show up in the code.

📦 Full source on GitHub

I also have a YouTube video about this


The Big Idea: LLM for Creativity, Code for Logic

The most important design decision was the split of responsibilities. The LLM handles things it’s good at — narration, personality, generating Pokémon names and descriptions. Python handles things that need to be deterministic — damage formulas, catch rates, HP tracking. LangGraph ties them together into a state machine that is the game loop.


1. Connecting to the LLM

LangChain abstracts LLM providers behind a unified interface. Whether you use OpenAI, Anthropic, or a self-hosted Ollama server, the API is the same. I’m running Qwen 3.5 on a remote Ollama instance:

from langchain_ollama import ChatOllama

llm = ChatOllama(
    model="qwen3.5:35b-a3b",
    base_url="http://127.0.0.1:11434",
    max_tokens=4096,
    temperature=0.7,
)

This single object gets reused everywhere — for narration, Pokémon generation, and Professor Oak’s dialogue. Swap the model or URL, and the entire game runs on a different LLM with zero code changes.


2. Prompt Templates: Giving the LLM a Role

Raw strings work, but templates are reusable. The narrator chain uses a SystemMessage to set the persona, a MessagesPlaceholder for conversation history, and variables for dynamic context:

from langchain_core.prompts import ChatPromptTemplate, MessagesPlaceholder

narrator = (
    ChatPromptTemplate.from_messages([
        ("system", """You are the narrator of a Pokémon text adventure.
Player: {player_name} | Location: {location} | Badges: {badge_count}
Team: {team_str} ..."""),
        MessagesPlaceholder("history"),
        ("human", "{input}"),
    ])
    | llm
)

The | pipe is LCEL (LangChain Expression Language) — it composes the template and the LLM into a single callable chain. One .invoke() fills the template, sends it to the model, and returns the response.


3. Structured Output: Pokémon as Data, Not Prose

This was the moment it clicked for me. Instead of parsing free text with regex, you define a Pydantic model and LangChain forces the LLM to return valid, typed data:

from pydantic import BaseModel, Field

class WildPokemonSchema(BaseModel):
    name: str
    type: str
    level: int = Field(ge=2, le=50)
    hp: int = Field(ge=20, le=120)
    attack: int = Field(ge=10, le=60)
    defense: int = Field(ge=10, le=50)

encounter_generator = llm.with_structured_output(WildPokemonSchema)

Now, when I call encounter_generator.invoke("Generate a wild Pokémon for Viridian Forest"), I get back an actual WildPokemonSchema object with guaranteed fields and value ranges — not a blob of text I have to hope is parseable.


4. LangGraph: The Game Is a State Machine

This is where things get interesting. A Pokémon game isn’t a linear prompt → response flow. It’s a loop with branches: explore → maybe encounter → fight or catch or run → check outcome → loop back. That’s a state machine, and that’s exactly what LangGraph gives you.

First, you define the state — everything the game needs to track:

from typing import Annotated, TypedDict

from langgraph.graph.message import add_messages

class GameState(TypedDict):
    messages: Annotated[list, add_messages]
    player_name: str
    location: str
    pokemon_team: list[dict]
    wild_pokemon: dict | None
    badge_count: int
    game_phase: str
    turn_count: int

The Annotated[list, add_messages] part is a reducer — it tells LangGraph to append new messages to the list instead of replacing it. This is how conversation history accumulates automatically.

Then you write nodes — plain functions that receive the state and return partial updates:

def explore_node(state: GameState) -> dict:
    # ... call the narrator LLM, return new messages
    return {"messages": [...], "game_phase": "exploration"}

def battle_node(state: GameState) -> dict:
    # ... handle fight/catch/run logic
    return {"messages": [...], "wild_pokemon": updated, "game_phase": "battle"}

You only return the keys that changed. LangGraph handles merging.
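
For completeness, here is a rough sketch of how the nodes get wired into a graph. This is not copied from main.py; the node names follow the flow diagram later in the post, and intro_node / game_over_node are hypothetical stand-ins for the real functions:

from langgraph.graph import StateGraph, START, END

# Hypothetical wiring that mirrors the game flow described in this post.
graph = StateGraph(GameState)
graph.add_node("intro", intro_node)
graph.add_node("explore", explore_node)
graph.add_node("battle", battle_node)
graph.add_node("game_over", game_over_node)

graph.add_edge(START, "intro")
graph.add_edge("intro", "explore")
graph.add_edge("game_over", END)
# The branching edges (explore -> heal / encounter check, battle -> explore /
# battle / game_over) are added with add_conditional_edges, shown in the next section.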


5. Conditional Edges: Branching Paths

The real power of the graph is dynamic routing. After exploring, should the player encounter a wild Pokémon or keep walking? After a battle turn, did they win, lose, or is the fight still going?

def route_after_battle(state: GameState) -> str:
    phase = state.get("game_phase", "")
    if phase == "exploration":
        return "explore"    # won the fight
    if phase == "game_over":
        return "game_over"  # your Pokémon fainted
    return "battle"         # fight continues

graph.add_conditional_edges(
    "battle",
    route_after_battle,
    {"explore": "explore", "game_over": "game_over", "battle": "battle"},
)

The routing function reads the state and returns a string key. The mapping dict sends the graph to the right node. No if/else spaghetti — the graph structure is the game logic.


6. interrupt(): Waiting for the Player

The most game-changing feature (pun intended). interrupt() pauses the entire graph and surfaces a prompt to the player. When they respond, execution resumes exactly where it left off:

# Inside battle_node:
action = interrupt(
    f"⚔️ BATTLE — Turn {state.get('turn_count', 0) + 1}\n"
    f"  {p['name']}: {p['hp']}/{p['max_hp']} HP\n"
    f"  Wild {w['name']}: {w['hp']}/{w['max_hp']} HP\n"
    f"  Your moves: [{moves_str}]\n"
    f"  Or: [catch] / [run]"
)
# 'action' now contains whatever the player typed

For this to work, you need a checkpointer — it saves the graph’s state between pauses:

from langgraph.checkpoint.memory import MemorySaver

checkpointer = MemorySaver()
game = graph.compile(checkpointer=checkpointer)

# Each session gets a thread_id (like a save file)
config = {"configurable": {"thread_id": f"game-{name}"}}

The game loop then checks for interrupts and resumes with the player’s input:

from langgraph.types import Command

snapshot = game.get_state(config)
if snapshot.tasks and snapshot.tasks[0].interrupts:
    prompt = snapshot.tasks[0].interrupts[0].value
    print(prompt)                # show the battle menu to the player
    player_input = input("> ")
    result = game.invoke(Command(resume=player_input), config)

The Final Graph

Here’s the complete game flow:

        ┌──────────┐
        │  START    │
        └────┬─────┘
             │
        ┌────▼─────┐
        │  intro    │  ← Professor Oak
        └────┬─────┘
             │
        ┌────▼─────┐ ◄──────────────────────────┐
        │ explore   │  ← waits for player input   │
        └────┬─────┘                              │
             │                                    │
      ┌──────┴──────┐                             │
      ▼             ▼                             │
 ┌────────┐  ┌──────────────┐                     │
 │  heal  │  │encounter_chk │                     │
 └───┬────┘  └──────┬───────┘                     │
     │          ┌───┴────┐                        │
     │        none    encounter                   │
     │          │        │                        │
     │          │ ┌──────▼──────┐                  │
     │          │ │   battle    │◄──┐             │
     │          │ │  (interrupt)│   │ ongoing     │
     │          │ └──────┬──────┘   │             │
     │          │   ┌────┼────┐    │             │
     │          │  win  loss  loop─┘             │
     │          │   │    │                        │
     └──────────┴───┴────┼────────────────────────┘
                         │
                  ┌──────▼──────┐
                  │  game_over  │ → END
                  └─────────────┘

Key Takeaways

Split responsibilities wisely. LLMs are great at generating creative text and structured data. They’re terrible at math and consistent state tracking. Let each do what it’s good at.

Structured output is underrated. .with_structured_output() turned the LLM from a chatbot into a game asset generator. No parsing, no praying — just typed Python objects.

LangGraph thinks in graphs, not chains. Once I stopped thinking “prompt → response” and started thinking “state → node → conditional edge → next state,” the game architecture fell into place naturally.

interrupt() makes real interactivity possible. Without it, you’re stuck building hacky input loops around the LLM. With it, the graph itself manages the pause/resume cycle.


The full game is a single main.py — about 300 lines of Python. Clone it, point it at any Ollama-compatible server, and start catching Pokémon.

📦 Source code on GitHub

Is coding over? My prediction…

Here’s a summary of the related video I uploaded to my YouTube channel:


We Are About to Let AI Write 90% of Our Code

Hi friends 👋

In the last two months, something has changed.

And I don’t mean incrementally. I mean fundamentally.

If you’ve tried using Claude Code with Opus — or accessed the Opus model through another provider — you can feel it. This is no longer autocomplete on steroids. This is something different.

This is real.
And it’s starting to work really well.

My Prediction

I’m not sure you’ll agree with me, but here it goes:

Within the next 2–3 years, 90% of the code we ship will be AI-generated.

Our job as developers will shift dramatically.

Instead of writing most of the code ourselves, we’ll focus on:

  • Providing high-quality context
  • Managing complexity and moving pieces
  • Handling edge cases AI can’t infer
  • Connecting systems
  • Making architectural decisions
  • Ensuring business value is delivered

In short, we’ll move from being writers of code to being managers of AI agents.

Almost like engineering managers — but for agents.

From Autocomplete to Agents

The early days of AI in development were about better tab-complete.

That era is over.

It’s time to “leave the seat” to AI agents — or even multiple agents working together — and step into a different role:

  • Making sure priorities are correct
  • Deciding which models to use and when
  • Managing cost (because yes, this can get expensive)
  • Ensuring output quality
  • Validating real-world impact

This year, I think we’ll learn a lot about how to be efficient in this new paradigm.

If You Don’t Believe It…

Try Claude Code with Opus.

That’s my honest recommendation. It’s what I’ve been using over the past two weeks, and it genuinely opened my eyes.

Other models can work too — the latest Codex versions are solid — but not all models feel the same. Some are useful, but don’t yet deliver that “this changes everything” moment.

Opus does.

New Challenges Ahead

Of course, this shift brings new problems:

What happens to pull requests?

If most of the code is AI-generated, what exactly are we reviewing?

What about knowledge depth?

If you’re not writing the code, are you really understanding it?

This is critical.

You don’t want to be on call at 3AM, debugging production, and only knowing how to “prompt better.”

We are not at the point where programming becomes assembly and English becomes the new C.

We are far from that.

You still need to understand what’s happening. Deeply.

The 90/10 Rule

I think we’ll see something like a Pareto distribution:

  • 90% of code: AI-generated
  • 10% of code: Human-crafted

That 10% will matter a lot.

It will involve:

  • Complex context
  • Architectural glue
  • Edge cases
  • Critical logic
  • Irreducible human judgment

Development isn’t disappearing.

But it is transforming.

Exciting Times (Depending on Why You’re Here)

If you love building, solving problems, designing systems — this is an incredibly exciting time.

If what you loved most was physically typing every line of code yourself…

That part is changing.


I’m optimistic.

I think software development is evolving, not dying.

But the role of the developer?
That’s definitely being rewritten.

Let me know what you think.

See you 👋

Free Auto Silence Remover / Slicer – Remove Silence from Videos Automatically

This post is based on the YouTube video I uploaded:

🔗 Related links

🔧 Source code (GitHub): https://github.com/ivmos/SilenceRemover (one of the available repos)

🌐 Try it online

https://silenceslicer.com/ (Jerry Li’s hosted app)

https://silence-remove.vercel.app (vercel deployment example)


Removing Silences from Videos with a Free Open-Source Tool (Local + Vercel Deployment)

Hi friends 👋
In this post, I want to show you the free, open-source tool I currently use to remove silences from my videos. We’ll walk through how it works locally, explore its UI and internals, and finally deploy it to Vercel so you can run it as a hosted solution.

If you create YouTube videos, podcasts, or tutorials, this tool can save you a lot of editing time.


Running the Project Locally

I’m starting directly from the repository. This is a Node.js project, and running it locally is straightforward:

yarn run dev

Once executed, the app spins up a development server on a local port. The local UI is slightly different from the currently hosted version, and running it yourself makes it ideal for experimentation and debugging.


Analyzing a Video (Silence Detection)

After the app is running, you can simply drag and drop a video file into the interface. I tested it with my previous video about Moises.ai, and the analysis was surprisingly fast.

To better understand what’s happening behind the scenes, I opened the developer tools. You can clearly see FFmpeg being loaded and network activity kicking in while the analysis runs.

Tweaking Detection Parameters

One of the best things about this tool is how configurable it is:

  • Mean volume – controls how quiet a segment must be to count as silence
  • Minimum silence duration – adjusts how long silence must last to be removed

After tweaking these values and clicking Analyze again, you’ll notice different results. Once finished, the app tells you the new duration of the video after silence removal.
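
Under the hood this is classic FFmpeg silence detection. The app runs FFmpeg in the browser, but the same two knobs map onto FFmpeg’s silencedetect filter, which you can drive from plain Python on your own machine. This is a conceptual sketch, not the project’s code, and my_video.mp4 is just a placeholder:

import re
import subprocess

def detect_silences(path: str, noise_db: int = -35, min_duration: float = 0.5):
    """Return (start, end) pairs of silent stretches found by FFmpeg.

    noise_db plays the role of the "mean volume" threshold and
    min_duration the "minimum silence duration" from the app's UI.
    """
    cmd = [
        "ffmpeg", "-i", path,
        "-af", f"silencedetect=noise={noise_db}dB:d={min_duration}",
        "-f", "null", "-",
    ]
    # silencedetect logs its findings to stderr
    out = subprocess.run(cmd, capture_output=True, text=True).stderr
    starts = [float(x) for x in re.findall(r"silence_start: ([\d.]+)", out)]
    ends = [float(x) for x in re.findall(r"silence_end: ([\d.]+)", out)]
    return list(zip(starts, ends))

print(detect_silences("my_video.mp4"))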


Exporting the Result

When you’re happy with the analysis, you can:

  • Export the processed video
  • Export the timeline (useful for further editing)

At this point, everything is handled locally through FFmpeg, without uploading your video anywhere — a big plus for privacy.


Working with the Timeline UI

The UI is honestly one of the highlights of this project.

You get a visual timeline where silence regions are clearly marked. From here you can:

  • Add zones manually
    Click Zone Add and select the part you want to include or modify.
  • Remove zones manually
    Click Zone Delete and simply select the sound you want to remove.

You can immediately play back the result to verify that everything works as expected — and it does, really well.


Deploying to Vercel

Next, I wanted a hosted version, so I deployed the project to Vercel.

Steps:

  1. Go to your Vercel dashboard
  2. Click Import Project
  3. Vercel detects it as a Node.js project automatically
  4. Deploy with default settings

At first, I ran into a deployment error. After copying the error message into ChatGPT and applying a small fix, the deployment worked perfectly.

Once deployed, the app behaves exactly the same as the local version — but now it’s available online under my own Vercel URL.


Quick Look at the Codebase

Since we had some extra time, I explored the code to understand how silence removal actually works.

Tech Stack Overview

  • Node.js
  • UI built with a React-like framework
  • FFmpeg running in the browser
  • WaveSurfer.js for waveform visualization

Key Components

  • VideoEditor component
  • Timeline / waveform component
  • Silence detection logic in the video renderer

How Silence Detection Works

The core logic happens in a helper responsible for silence analysis:

  • It uses WaveSurfer.js with the Regions plugin
  • Regions are automatically extracted based on silence
  • The analyzeRegions helper:
    • Extracts regions
    • Filters them by silence thresholds
    • Produces the final list of segments to keep

FFmpeg is then called with the correct parameters to stitch together only the non-silent parts.

Simple, elegant, and very effective.


Final Thoughts

This tool is a great example of how powerful open-source projects can be when combined with modern web tech. It’s fast, private, configurable, and easy to deploy.

If you edit videos regularly, I highly recommend checking it out and even self-hosting it like I did.

See you in the next video 👋

Books I read in 2025

This is a summary/transcription of this related video I made:


The Books I Read in 2025 (and Why I Recommend Them)

2025 is coming to an end, and for the first time on this channel, I wanted to talk about books. Reading has been an important part of my year, and I’ve gone through a mix of science fiction, music autobiographies, self-reflection, comedy, and technology. Here’s a rundown of the books I read in 2025 and why I think each of them is worth your time.

Exhalation – Ted Chiang

I’ll start with Exhalation by Ted Chiang. This is technically a science-fiction book, but honestly, it feels more like a philosophy book disguised as sci-fi. Each story explores deep ideas about consciousness, time, free will, and what it means to be human. If you enjoy science fiction that makes you stop and think rather than just entertain you, this one is highly recommended.

Eric Clapton: The Autobiography

Next is Eric Clapton: The Autobiography. I really liked this book because it’s not just about music—although if you love guitar and blues, that part is obviously great. It also dives deeply into addiction, personal struggles, and inner demons. If you’ve dealt with these issues yourself, or think you might someday, this book can be surprisingly helpful. It’s honestly incredible that Clapton is still alive and still rocking after everything he’s been through.

Stolen Focus – Johann Hari

Another book I read was Stolen Focus by Johann Hari. This is a self-help book, but in a very grounded way. If you often feel distracted, struggle to focus for long periods, or find yourself trapped in doom-scrolling on TikTok or similar platforms, this book is for you. It explores how modern technology affects our attention and why this is becoming a serious problem—not just for kids, but for everyone. I personally found it very insightful.

The Music Lesson – Victor Wooten

The Music Lesson by Victor Wooten is another standout. Victor Wooten is a legendary bassist, but this book isn’t really about music technique. It’s about life. Rhythm, listening, timing, and feel are all used as metaphors for how we live. Even if you’re not deeply into music, there’s a lot here that connects directly to everyday life.

Masters of Doom – David Kushner

This one is closer to the typical topic of my channel. Masters of Doom by David Kushner tells the story of John Carmack and John Romero, the creators of id Software. It’s a fascinating mix of hacking culture, creativity, obsession, and extremely hard work. The “work hard, party hard” mentality is very present. If you’re a developer or work in tech, this book is incredibly inspiring and motivating.

Into the Void – Geezer Butler

Into the Void is the autobiography of Geezer Butler from Black Sabbath. He talks extensively about his life, the band, and the people around them—Ozzy Osbourne and many others. I can only recommend this book if you’re really a fan of Black Sabbath or that style of music, which I am. Otherwise, it might not be for everyone.

Project Hail Mary – Andy Weir

Project Hail Mary by Andy Weir was one of the highlights of the year. It’s an excellent science-fiction novel with humor, emotional moments, and great pacing. I read it really, really fast. I’ve also heard there’s a movie adaptation coming, which doesn’t surprise me at all. If you like sci-fi that’s smart but also fun and emotional, this is an easy recommendation.

A Comedy Novel – Tom Sharpe

I also read a book by Tom Sharpe. It wasn’t my first time—I think I’ve read it two or three times already—and I still love it. His style of English comedy is absurd, sharp, and full of unexpected twists. I actually read this one during my wedding, which is quite ironic. If you enjoy British humor, Tom Sharpe is always a safe bet.

AI Engineering – Chip Huyen

The last book is AI Engineering by Chip Huyen. This is a fairly large book, but it’s not overly deep in every section. Instead, it works very well as an introduction for developers who want to understand how real AI systems are built. It’s practical, grounded, and avoids hype. The book focuses on how AI systems actually work, the trade-offs involved, and real-world constraints. Some chapters go deeper, while others stay high-level. Overall, it reflects what “AI engineering” has become—basically the new full-stack buzzword, but with real substance behind it.

Final Thoughts

This year I read quite a lot, especially about artificial intelligence and practical topics, but also about life, focus, creativity, and music. I’m genuinely happy about that, and I hope I’ll read just as much (or more) next year.

Reading is a great way to use your time. Instead of jumping from one small attention hole to another, reading forces you to focus. And as I learned—ironically—from Stolen Focus, the more time you spend truly focused on something, the happier you tend to be.

If you read any of these books this year, or plan to, let me know. And if you have recommendations for 2026, I’m always open to them.

Network Tools Inside a POD: Exploring /dev/tcp and BusyBox

When working with containers, especially in Kubernetes, it’s common to troubleshoot network issues or communicate with other services from within a POD. For most engineers, the go-to tools for these tasks are small utilities like telnet, curl, nc or wget, many of them provided by BusyBox. However, there are scenarios where BusyBox isn’t installed in the POD, and you find yourself without these essential networking tools.

Related video:

But don’t worry—if your POD has bash installed, there’s a lesser-known method you can use: /dev/tcp. This built-in feature of bash allows you to perform basic network communication tasks directly from the command line.

The Role of BusyBox in a POD

BusyBox is a popular suite of Unix utilities that provides stripped-down versions of common commands. It’s widely used in containers because of its minimal footprint. With BusyBox, you get access to a variety of tools, including:

  • telnet for simple network connections,
  • wget for making HTTP requests (curl is not part of BusyBox and has to be installed separately),
  • nslookup for DNS lookups.

However, if your POD image is extremely minimal or designed for a specific purpose, BusyBox might not be included. This leaves you without the usual arsenal of network troubleshooting tools.

Enter /dev/tcp: A Hidden Bash Gem

If you’re stuck without BusyBox, and you have access to bash, you can still perform network operations using the special file /dev/tcp. This feature is available in bash versions 2.04 and later, and it provides a way to make TCP and UDP connections directly from the shell.

How /dev/tcp Works

The /dev/tcp file isn’t a real file on disk—rather, it’s a special bash feature that lets you open a network connection and send or receive data. The syntax is straightforward:

cat < /dev/tcp/<hostname>/<port>

This command attempts to read from a TCP connection to the specified hostname and port. You can also send data by redirecting output to /dev/tcp:

echo -e "GET / HTTP/1.1\nHost: <hostname>\n\n" > /dev/tcp/<hostname>/<port>

Examples of Using /dev/tcp

Let’s explore a few practical examples of using /dev/tcp inside a POD:

1. Checking if a Port is Open

You can use /dev/tcp to check if a specific port is open on a target host. This is similar to what you might do with telnet or nc:

if echo > /dev/tcp/google.com/80; then
  echo "Port 80 is open"
else
  echo "Port 80 is closed or unreachable"
fi

This command attempts to send data to Google’s HTTP port (80). If the port is open, the echo command will succeed; otherwise, it will fail.

2. Performing a Simple HTTP GET Request

Without curl or wget, you can still make HTTP requests using /dev/tcp:

exec 3<>/dev/tcp/example.com/80
echo -e "GET / HTTP/1.1\nHost: example.com\nConnection: close\n\n" >&3
cat <&3
exec 3>&-

Here, the exec 3<>/dev/tcp/example.com/80 command opens a TCP connection to example.com on port 80 and assigns file descriptor 3 to it. The echo command sends an HTTP GET request to the server, and the cat command reads and displays the response.

3. Basic DNS Query

You can use /dev/udp (a similar feature for UDP) to perform a simple DNS query:

echo -ne "\xaa\xaa\x01\x00\x00\x01\x00\x00\x00\x00\x00\x00\x07example\x03com\x00\x00\x01\x00\x01" > /dev/udp/8.8.8.8/53

This sends a raw DNS query to Google’s DNS server (8.8.8.8) asking for the IP address of example.com. Interpreting the response requires more work, but this example shows how you can interact with network services at a low level.

Conclusion

While BusyBox is a fantastic toolset for handling networking tasks inside a POD, it isn’t always available. In such cases, knowing how to use /dev/tcp can be a lifesaver. This built-in feature of bash allows you to perform basic network operations, such as checking open ports or making simple HTTP requests, without relying on external utilities.

Remember, though, that /dev/tcp is not as user-friendly or powerful as tools like curl or wget. It’s best used as a fallback option when you’re in a minimal environment and need to troubleshoot connectivity issues.

By mastering these lesser-known tools, you can be better prepared for any situation that arises within your Kubernetes environment.

Exploring Telnet: The Retro Tech Still Offering Fun Surprises

In the fast-paced world of modern computing, where sleek interfaces and seamless connectivity reign supreme, it’s easy to forget about the old tools that paved the way for today’s digital marvels. One such tool is Telnet. Though it may seem antiquated now, Telnet has a storied history and even today, offers some unexpectedly fun uses that you can enjoy right from your keyboard.

See related video in my Youtube channel.

What is Telnet?

Telnet, short for “TELetype NETwork,” is one of the earliest protocols used for accessing remote computers over the internet or a local network. Telnet allows users to connect to remote servers and interact with them as if they were local, using a text-based interface. Before graphical user interfaces (GUIs) became the norm, Telnet was a fundamental tool for system administrators, developers, and anyone needing remote access to a computer.

Telnet operates on the client-server model. A Telnet client connects to a Telnet server via the command line or a terminal emulator, and once connected, users can execute commands on the remote machine. It was a revolutionary tool in its time, but it lacks the security features of more modern protocols like SSH (Secure Shell). As a result, Telnet has largely fallen out of favour for secure communications but remains a fascinating relic of the early internet.

Two Fun Uses of Telnet

Despite its outdated nature, Telnet can still provide a surprising amount of entertainment. Here are two fun and nostalgic uses of Telnet that you can try out:

1. Watch Star Wars in ASCII Art

One of the most delightful Easter eggs hidden on the internet is the ability to watch “Star Wars: Episode IV – A New Hope” rendered entirely in ASCII art via Telnet. This project, created by Simon Jansen, captures the magic of the iconic film using nothing but characters from the ASCII table.

How to Watch:

  1. Open your terminal or command prompt.
  2. Type the following command and press Enter: telnet towel.blinkenlights.nl

You will be greeted with a surprisingly detailed rendition of the Star Wars universe, complete with scrolling text and iconic scenes—all crafted with ASCII characters. It’s a testament to the creativity of early internet enthusiasts and a fun way to revisit a classic film.

2. Relive the Max Headroom Phenomenon

Max Headroom, the iconic 1980s character known for his glitchy, computer-generated appearance and stuttering speech, became a symbol of futuristic tech and cyberpunk aesthetics. While Max Headroom’s origins lie in TV, movies, and commercials, you can experience a bit of this retro-futuristic character through Telnet.

How to Connect:

  1. Open your terminal or command prompt.
  2. Type the following command and press Enter: telnet 1984.ws

You’ll be greeted with a Max Headroom emulation that pays homage to the quirky and groundbreaking character. It’s a fun way to dive into the retro-futuristic world that captivated audiences in the 80s.

How to Exit Telnet

While exploring Telnet is fun, knowing how to exit the session is equally important. Exiting Telnet sessions can vary slightly depending on the client and the server configuration, but here are the general steps:

  1. Use the escape sequence:
    • Typically, you can use the escape sequence Ctrl+] (hold Ctrl and press ]). This should bring you to the Telnet command prompt (telnet>).
  2. Close the connection:
    • Once at the Telnet command prompt, type quit or exit and press Enter. This should close the Telnet session and return you to your original command prompt.
  3. Alternative method:
    • If the above methods don’t work, simply closing the terminal or command prompt window will also terminate the Telnet session.
  4. If your terminal looks garbled after a Telnet session, run reset to restore it.

Conclusion

Telnet may no longer be the go-to tool for remote computing, but its legacy lives on in unexpected ways. Whether you’re an old-school tech enthusiast or just looking for a bit of nostalgic fun, exploring Telnet can be a rewarding experience. From watching Star Wars in ASCII art to reliving the Max Headroom phenomenon, these hidden gems highlight the enduring creativity and innovation of early internet culture. So, fire up your terminal, connect to a Telnet server, and take a step back in time—you might just be surprised by what you find. And when you’re ready to log off, just remember those simple steps to exit. Happy exploring!

Exploring Steganography with Hidden Unicode Characters

In the digital age, where information security is paramount, steganography has emerged as a fascinating and subtle method for concealing information. Unlike traditional encryption, which transforms data into a seemingly random string, steganography hides information in plain sight. One intriguing technique is the use of hidden Unicode characters in plain text, an approach that combines simplicity with stealth.

Related video from my Youtube channel:

What is Steganography?

Steganography, derived from the Greek words “steganos” (hidden) and “graphein” (to write), is the practice of concealing messages or information within other non-suspicious messages or media. The goal is not to make the hidden information undecipherable but to ensure that it goes unnoticed. Historically, this could mean writing a message in invisible ink between the lines of an innocent letter. In the digital realm, it can involve embedding data in images, audio files, or text.

The Role of Unicode in Text Steganography

Unicode is a universal character encoding standard that allows for text representation from various writing systems. It covers an enormous range of characters: letters, numbers, symbols, and control characters. Some of these characters are non-printing or invisible, making them perfect for hiding information within plain text without altering its visible appearance.

How Does Unicode Steganography Work?

Unicode steganography leverages the non-printing characters within the Unicode standard to embed hidden messages in plain text. These characters can be inserted into the text without affecting its readability or format. Here’s a simple breakdown of the process:

  1. Choose Hidden Characters: Unicode offers several invisible characters, such as the zero-width space (U+200B), zero-width non-joiner (U+200C), and zero-width joiner (U+200D). These characters do not render visibly in the text.
  2. Encode the Message: Convert the hidden message into a binary or encoded format. Each bit or group of bits can be represented by a unique combination of invisible characters.
  3. Embed the Message: Insert the invisible characters into the plain text at predetermined positions or intervals, embedding the hidden message within the regular text.
  4. Extract the Message: A recipient who knows the encoding scheme can extract the invisible characters from the text and decode the hidden message.

Example: Hiding a Message

Let’s say we want to hide the message “Hi” within the text “Hello World”. First, we convert “Hi” into binary (using ASCII values):

  • H = 72 = 01001000
  • i = 105 = 01101001

Next, we map these binary values to invisible characters. For simplicity, let’s use the zero-width space (U+200B) for ‘0’ and zero-width non-joiner (U+200C) for ‘1’. The binary for “Hi” becomes a sequence of these characters:

  • H: 01001000 → U+200B U+200C U+200B U+200B U+200C U+200B U+200B U+200B
  • i: 01101001 → U+200B U+200C U+200C U+200B U+200C U+200B U+200B U+200C

We then embed this sequence in the text “Hello World”:

H\u200B\u200C\u200B\u200B\u200C\u200B\u200B\u200Be\u200B\u200C\u200C\u200B\u200C\u200B\u200B\u200Cllo World

To the naked eye, “Hello World” appears unchanged, but the hidden message “Hi” is embedded within.
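
Here is a small Python sketch of that exact scheme (zero-width space for 0, zero-width non-joiner for 1). For simplicity it appends the whole invisible payload after the first character of the cover text instead of spreading it out:

ZERO = "\u200b"   # zero-width space      -> bit 0
ONE  = "\u200c"   # zero-width non-joiner -> bit 1

def hide(cover: str, secret: str) -> str:
    bits = "".join(f"{ord(c):08b}" for c in secret)
    payload = "".join(ONE if b == "1" else ZERO for b in bits)
    # Insert the invisible payload right after the first visible character.
    return cover[0] + payload + cover[1:]

def reveal(text: str) -> str:
    bits = "".join("1" if c == ONE else "0" for c in text if c in (ZERO, ONE))
    return "".join(chr(int(bits[i:i + 8], 2)) for i in range(0, len(bits), 8))

stego = hide("Hello World", "Hi")
print(stego)           # still looks like "Hello World"
print(reveal(stego))   # -> "Hi"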

Advantages and Disadvantages

Advantages:

  • Subtlety: The hidden information is invisible to the casual observer.
  • Preserves Original Format: The visible text remains unaltered, maintaining readability and meaning.
  • Easy to Implement: Inserting and extracting hidden characters is straightforward with proper tools.

Disadvantages:

  • Limited Capacity: The amount of data that can be hidden is relatively small.
  • Vulnerability: If the presence of hidden characters is suspected, they can be detected and removed.
  • Dependence on Format: Changes in text formatting or encoding can corrupt the hidden message.

Practical Applications

  1. Secure Communication: Concealing sensitive messages within seemingly innocuous text.
  2. Watermarking: Embedding copyright information in digital documents.
  3. Data Integrity: Adding hidden markers to verify the authenticity of text.

Conclusion

Unicode steganography in plain text with hidden characters offers a clever and discreet way to conceal information. By understanding and utilizing the invisible aspects of Unicode, individuals can enhance their data security practices, ensuring their messages remain hidden in plain sight. As with all security techniques, it’s essential to stay informed about potential vulnerabilities and to use these methods responsibly.

Understanding Canary Tokens

In the realm of cybersecurity, staying ahead of potential threats is paramount. One innovative method that has gained traction in recent years is the use of canary tokens. These digital tripwires are designed to alert organizations to potential breaches and unauthorized access. In this blog post, we’ll explore what canary tokens are, how they work, and why they are becoming an essential tool in the cybersecurity toolkit.

Related video from my channel:

What are Canary Tokens?

Canary tokens, inspired by the canaries historically used in coal mines to detect dangerous gases, are digital markers that serve as early warning systems for unauthorized access or malicious activity. When a canary token is accessed, triggered, or interacted with in any unauthorized manner, it sends an alert to the network administrators, signaling a potential security breach.

These tokens can take various forms, including:

  1. Documents: Files with embedded tracking capabilities.
  2. Web URLs: Links that trigger alerts when visited.
  3. API Keys: Fake credentials that generate warnings when used.
  4. DNS Entries: Domain name entries that alert administrators when queried.

How Do Canary Tokens Work?

The operation of canary tokens is straightforward yet effective. Here’s a typical workflow:

  1. Deployment: Canary tokens are strategically placed within a network, embedded in documents, or distributed in ways that they appear attractive to potential attackers.
  2. Monitoring: The tokens remain dormant until they are accessed or triggered. They are designed to look like genuine assets or credentials, making them appealing targets.
  3. Alerting: When a token is accessed, it sends an alert to the administrators. This alert can be in the form of an email, SMS, or integration with a monitoring system.
  4. Response: Upon receiving an alert, administrators can investigate the breach, determine the extent of the intrusion, and take necessary actions to mitigate the threat.

Why Use Canary Tokens?

Canary tokens offer several advantages that make them a valuable addition to any cybersecurity strategy:

1. Early Detection

Canary tokens provide early warnings of potential security breaches, allowing organizations to respond quickly before significant damage occurs. This proactive approach can prevent data theft, system compromise, and other malicious activities.

2. Simplicity and Low Cost

Implementing canary tokens is relatively simple and cost-effective compared to other cybersecurity measures. They do not require complex infrastructure changes or significant financial investments.

3. Minimal False Positives

Since canary tokens are designed to be accessed only in specific scenarios, the likelihood of false positives is low. Alerts generated by canary tokens are more likely to indicate genuine security incidents.

4. Versatility

Canary tokens can be customized to fit various scenarios and environments. Whether embedded in documents, disguised as login credentials, or hidden in web applications, they can be tailored to meet specific security needs.

5. Psychological Deterrence

The knowledge that canary tokens are in place can act as a psychological deterrent for potential attackers. The risk of triggering an alert and being detected can discourage malicious activities.

Real-World Applications of Canary Tokens

Protecting Sensitive Data

Organizations dealing with sensitive information, such as financial institutions or healthcare providers, can embed canary tokens in critical files. If these files are accessed or exfiltrated, administrators are immediately alerted.

Monitoring Network Intrusions

Canary tokens can be placed within a network to detect unauthorized access. For example, creating a fake administrative login page with a canary token can reveal attempts to gain unauthorized control.

API Security

By deploying canary tokens as fake API keys, organizations can detect and track the misuse of stolen credentials. This helps in identifying compromised systems and taking corrective actions.

Conclusion

In an era where cyber threats are constantly evolving, canary tokens offer a proactive and efficient way to detect and respond to security incidents. Their simplicity, cost-effectiveness, and versatility make them an invaluable tool for organizations looking to bolster their cybersecurity defenses. By incorporating canary tokens into their security strategies, organizations can gain a critical edge in protecting their digital assets and maintaining the integrity of their networks.

Stay vigilant, stay secure, and consider deploying canary tokens as part of your comprehensive cybersecurity strategy.

Understanding PNG Format and Draw.io steganography

Introduction

Portable Network Graphics (PNG) is a popular raster graphics file format known for its lossless compression and wide support across various platforms and applications. In this blog post, we’ll delve into how PNG works, its format structure with a focus on headers and chunks, and how Draw.io leverages these features to embed drawing code within PNG files.

Related video from my Youtube channel:

The PNG Format

PNG was developed to replace the older Graphics Interchange Format (GIF). It offers several advantages, including better compression and support for a wider range of colors and transparency levels. Unlike JPEG, which is a lossy format, PNG preserves the original image quality, making it ideal for images that require precise details, such as text, graphics, and illustrations.

Structure of a PNG File

A PNG file is composed of a series of chunks. Each chunk has a specific function and structure, allowing for flexible and efficient image data storage. Here’s a breakdown of the core components of a PNG file:

  1. PNG Signature: The file starts with an 8-byte signature that identifies the file as a PNG image. This signature is essential for programs to recognize and process the file correctly.
  2. Chunks: Following the signature, the file consists of multiple chunks. Each chunk has four main parts:
    • Length (4 bytes): The length of the data field.
    • Chunk Type (4 bytes): A four-letter ASCII code specifies the chunk type.
    • Chunk Data (variable length): The data contained in the chunk.
    • CRC (4 bytes): A cyclic redundancy check value for error-checking.

There are several critical chunks, including:

  • IHDR (Image Header): Contains basic information about the image, such as width, height, bit depth, color type, compression method, filter method, and interlace method.
  • PLTE (Palette): Defines the color palette used if the image is paletted.
  • IDAT (Image Data): Contains the actual image data, compressed using the zlib algorithm.
  • IEND (Image End): Marks the end of the PNG file.

Additional chunks can store metadata, text information, and other data, enabling extended functionalities.
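
To make the chunk layout concrete, here is a short Python sketch that walks a PNG and prints each chunk’s type and size. Any PNG file works; diagram.png is just a placeholder name:

import struct
import zlib

def read_chunks(path: str):
    """Yield (chunk_type, data) pairs from a PNG file."""
    with open(path, "rb") as f:
        if f.read(8) != b"\x89PNG\r\n\x1a\n":   # 8-byte PNG signature
            raise ValueError("not a PNG file")
        while True:
            header = f.read(8)
            if len(header) < 8:
                break
            length, chunk_type = struct.unpack(">I4s", header)
            data = f.read(length)
            (crc,) = struct.unpack(">I", f.read(4))
            # The CRC covers the chunk type plus the chunk data.
            if crc != zlib.crc32(chunk_type + data) & 0xFFFFFFFF:
                raise ValueError(f"corrupt chunk {chunk_type!r}")
            yield chunk_type.decode("ascii"), data
            if chunk_type == b"IEND":
                break

for ctype, data in read_chunks("diagram.png"):
    print(ctype, len(data), "bytes")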

How Draw.io Embeds Code in PNG Files

Draw.io is an online diagramming tool that allows users to create a wide range of diagrams, from flowcharts to network diagrams. One of its unique features is the ability to embed the diagram’s XML code directly within a PNG file. This makes it easy to share and store diagrams without needing separate files for the image and the underlying code.

Here’s how Draw.io achieves this:

  1. Embedding XML in a PNG: Draw.io takes advantage of PNG’s chunk-based structure by adding a custom chunk that contains the diagram’s XML data. This chunk is typically labeled zTXt or tEXt to indicate compressed or uncompressed textual data, respectively.
  2. Custom Chunk Integration: When a user saves a diagram as a PNG in Draw.io, the application generates the diagram’s XML representation and compresses it if necessary. This XML data is then inserted into a custom chunk within the PNG file.
  3. Reading Embedded Data: When the PNG file is opened in Draw.io, the application scans the chunks, identifies the custom chunk containing the XML data, extracts it, and reconstructs the diagram based on the embedded code.

This seamless integration allows users to benefit from the portability and compatibility of the PNG format while maintaining the ability to edit and update the diagrams within Draw.io.
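
Building on the read_chunks sketch above, pulling the textual chunks back out of such a PNG takes only a few more lines. The exact keyword Draw.io uses may vary between versions, so this sketch simply returns every tEXt/zTXt entry it finds:

import zlib

def extract_text_chunks(path: str) -> dict:
    """Return {keyword: text} for all tEXt/zTXt chunks in a PNG."""
    found = {}
    for ctype, data in read_chunks(path):
        if ctype == "tEXt":
            keyword, _, text = data.partition(b"\x00")
            found[keyword.decode("latin-1")] = text.decode("latin-1")
        elif ctype == "zTXt":
            keyword, _, rest = data.partition(b"\x00")
            # rest[0] is the compression method (0 = zlib); the remainder is compressed text.
            found[keyword.decode("latin-1")] = zlib.decompress(rest[1:]).decode("latin-1")
    return found

print(extract_text_chunks("diagram.png"))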

Conclusion

PNG is a versatile and powerful image format, and its chunk-based structure offers extensive flexibility for embedding additional data. Draw.io leverages this feature to embed the diagram’s XML code directly within PNG files, making it convenient for users to share and edit diagrams without losing any information. Understanding the inner workings of PNG and its structure not only enhances our appreciation for this format but also opens up possibilities for creative and innovative uses in various applications.

Interesting links

https://es.wikipedia.org/wiki/Portable_Network_Graphics

https://github.com/pedrooaugusto/steganography-png

Note: This post has been partly generated with ChatGPT