Posts

Showing posts from May, 2026

How to Host Your First Website on a Raspberry Pi (Beginner's Guide)

🎯 Why This Matters

Have you ever wanted to share a file, a photo, or a little webpage with someone in your house, but didn't want to upload it to the internet first? Or maybe you have heard the word "server" and pictured giant, noisy computers in a freezing cold room. Here is a secret: a server is just a regular computer that waits for someone to ask it for information. Think of a server like a friendly librarian. You ask for a book, and they go find it and hand it to you.

Today, we are going to turn a tiny, credit-card-sized computer called a Raspberry Pi into your very own home server. It is a fantastic, low-pressure way to learn how the internet actually works behind the scenes. By the end of this guide, you will have a real website running in your house. You will be able to visit it from your phone, your tablet, or your laptop! Don't worry if this looks confusing at first — we'll break it down step by step, and...
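The "friendly librarian" idea can be tried on any machine, Pi included, using Python's built-in web server. A minimal sketch — the port number is just an example choice, and it assumes the files you want to share sit in the current directory:

```python
# Serve the current directory over HTTP - Python's stdlib "librarian".
from http.server import HTTPServer, SimpleHTTPRequestHandler

PORT = 8080  # example port; anything above 1024 works without root

def make_server(port: int = PORT) -> HTTPServer:
    # SimpleHTTPRequestHandler answers GET requests by handing back files
    # from the current directory, like a librarian fetching books.
    return HTTPServer(("0.0.0.0", port), SimpleHTTPRequestHandler)

# To start serving (blocks until you press Ctrl+C):
# make_server().serve_forever()
```

Once it is running, any device on your home network can visit http://<your-pi's-IP>:8080 and see the page.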

Podman Rootless Containers: Understanding the Copy Fail Exploit and How to Defend Against It

Rootless containers were supposed to be the safe default — no root on the host, no problem. The Copy Fail vulnerability in Podman shatters that assumption by demonstrating how a seemingly mundane file-copy operation can become a privilege escalation vector. If you're running Podman in rootless mode in CI pipelines, developer workstations, or production edge nodes, this is worth your full attention right now.

🔍 What Is the Copy Fail Vulnerability?

The vulnerability centers on podman cp — the command used to copy files between a container and the host filesystem. In rootless mode, Podman uses a user namespace mapping: your unprivileged host UID (e.g., 1000) maps to UID 0 inside the container. This is the foundation of rootless isolation. The exploit arises from a race condition and improper privilege handling during the copy operation. When podman cp transfers a file from a container to the host, it temporarily operates with eleva...
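The user-namespace mapping at the heart of rootless isolation can be modeled in a few lines of Python. This is a sketch only — the real translation is done by the kernel's user namespace, not by any helper like this, and the subordinate-UID values assume a typical /etc/subuid allocation (start 100000, length 65536) for host UID 1000:

```python
# Illustrative model of a rootless user-namespace UID map.
# Container UID 0 maps to your own host UID; container UIDs 1..N map into
# a subordinate range from /etc/subuid. All three values are assumptions.
HOST_UID = 1000        # your unprivileged login UID
SUBUID_START = 100000  # first subordinate UID allocated to you
SUBUID_COUNT = 65536   # length of the subordinate range

def container_to_host_uid(container_uid: int) -> int:
    """Translate a UID seen inside the container to the host UID that owns it."""
    if container_uid == 0:
        return HOST_UID  # "root" in the container is just you on the host
    if 1 <= container_uid < SUBUID_COUNT:
        return SUBUID_START + container_uid - 1
    raise ValueError("UID outside the mapped range")

print(container_to_host_uid(0))  # container root owns host files as UID 1000
```

The key takeaway: files written by "root" inside the container land on the host owned by your ordinary UID — which is exactly the boundary the copy exploit abuses.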

How to Give Your Telegram Bot Conversation Memory with python-telegram-bot and OpenAI

A Telegram bot that forgets every message the moment it replies is barely more useful than a search bar. If you're forwarding user messages to OpenAI's Chat Completions API, you need to maintain a per-user message history array — otherwise the model has zero context, and multi-turn conversations are impossible. This is a common gap in beginner implementations, and fixing it cleanly requires understanding both where to store state and how to structure the messages payload.

🧠 Why the Bot Forgets

OpenAI's Chat Completions API is stateless. Every request you send must include the full conversation history in the messages array. If you only send the latest user message, the model treats it as the first message in a brand-new conversation. Your bot isn't broken — it's just not accumulating history before each API call. The fix has two parts:

1. Maintain an in-memory (or persistent) list of message dicts per user
2. Appen...
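The per-user history idea can be sketched like this. The function and variable names are illustrative, and the OpenAI call itself is omitted so the history logic stands on its own — you would pass the returned list as the messages argument and feed the model's answer back through record_reply:

```python
from collections import defaultdict

MAX_TURNS = 20  # cap stored turns so the payload stays within the context window

# chat_id -> list of {"role": ..., "content": ...} dicts
histories: dict[int, list[dict]] = defaultdict(list)

def build_payload(chat_id: int, user_text: str, system_prompt: str) -> list[dict]:
    """Append the new user message and return the full messages array for the API."""
    history = histories[chat_id]
    history.append({"role": "user", "content": user_text})
    del history[:-MAX_TURNS]  # drop oldest turns beyond the cap
    # The system prompt is not stored; it is re-sent at the front of every call.
    return [{"role": "system", "content": system_prompt}] + history

def record_reply(chat_id: int, assistant_text: str) -> None:
    """Store the model's answer so the next request carries full context."""
    histories[chat_id].append({"role": "assistant", "content": assistant_text})
```

Because the state is keyed by chat_id, each Telegram conversation accumulates its own independent history; swapping the dict for a database table gives you persistence across restarts.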

System, User, and Assistant Roles in the OpenAI Chat API Explained

If you've ever peeked inside an OpenAI ChatCompletion API call, you've seen three message roles: system, user, and assistant. Most people quickly figure out that user is what you send and assistant is what the model replies — but the system role often stays mysterious. Understanding all three roles deeply is the difference between a chatbot that feels generic and one that behaves exactly the way you need it to. This post breaks down each role, shows you how they interact, and gives you production-ready patterns you can use right away.

Table of Contents
🧠 How the Chat API Structures a Conversation
🎛️ The System Role: Your Model's Instruction Manual
💬 The User Role: The Human Side of the Conversation
🤖 The Assistant Role: More Than Just Replies
🔗 How the Three Roles Work Together
🛠️ Practical Patterns and Real-World Examples
⚠️ Common Mistakes and How to Avoid Them
✅ Closing Summary

🧠 How ...
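The three roles come together in a single messages array. A minimal, self-contained sketch of that structure — the instructions and questions are examples, and the commented-out client call is shown for shape only:

```python
# A Chat Completions payload showing how the three roles interleave.
messages = [
    # system: sets behavior for the whole conversation; sent first
    {"role": "system", "content": "You are a terse assistant. Answer in one sentence."},
    # user: what the human typed
    {"role": "user", "content": "What does HTTP stand for?"},
    # assistant: a PREVIOUS model reply, replayed to give the model context
    {"role": "assistant", "content": "HyperText Transfer Protocol."},
    # the newest user turn - the one the model will actually answer
    {"role": "user", "content": "And HTTPS?"},
]

# The request itself would look like:
# response = client.chat.completions.create(model="gpt-4o-mini", messages=messages)
```

Notice that the assistant message is not something you write by hand in production — it is the model's own earlier output, echoed back so the follow-up question "And HTTPS?" has something to refer to.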

How to Safely Manage OpenAI and Gemini API Keys and Calculate Token Costs (Complete Python Guide)

You've been issued OpenAI and Gemini API keys, and you know you shouldn't just paste them into your source code — but then what should you do? A single mismanaged API key, the moment it lands on GitHub, can turn into a bill worth millions of won. This post walks you step by step through safely storing and loading API keys in a Python project, plus practical techniques for estimating token costs in advance so you never get hit with a surprise bill.

Table of Contents
🔑 Why Hardcoding API Keys Is Dangerous
📦 Managing Keys with a .env File and python-dotenv
🖥️ Managing Keys with OS Environment Variables
🛡️ Blocking Key Leaks at the Source with .gitignore
⚙️ Code to Load and Verify OpenAI and Gemini Keys
💰 Calculating Token Costs: Avoiding Bill Shock
🔒 Advanced Security Patterns Used in Production
✅ Closing Summary

🔑 Why Hardcoding API Keys Is Dangerous

An API key is like a password. Write it directly into your source code and it leaks through paths like these:

- Upload to a public GitHub repository: push to a public repo by mistake, and bots will scan it and steal the key within minutes.
- Code sharing and screenshots: paste your code into a Q&A forum, and the key goes with it.
- Leaked log files: error logs sometimes dump the entire set of environment variables.

Both OpenAI and Google disable a leaked key as soon as they detect it, but thousands of dollars of requests may already have gone through before that happens. Prevention is the only answer.

graph TD A["API key hardcoded in source"] --> B{"GitHub Push?"} B -->|"Public Repo"| C["🤖 Bot scans it within min...
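The environment-variable pattern and a rough cost estimate can be sketched together. The variable name, prices, and token counts below are illustrative placeholders, not current OpenAI or Gemini rates — always check the provider's pricing page before budgeting:

```python
import os

def load_api_key(var_name: str = "OPENAI_API_KEY") -> str:
    """Read the key from the environment; fail loudly instead of hardcoding."""
    key = os.environ.get(var_name)
    if not key:
        raise RuntimeError(f"{var_name} is not set - export it or put it in .env")
    return key

def estimate_cost_usd(prompt_tokens: int, completion_tokens: int,
                      price_in_per_1k: float, price_out_per_1k: float) -> float:
    """Estimate one request's cost from token counts and per-1K-token prices."""
    return (prompt_tokens / 1000) * price_in_per_1k + \
           (completion_tokens / 1000) * price_out_per_1k

# Example with placeholder prices ($0.005 in / $0.015 out per 1K tokens):
cost = estimate_cost_usd(1200, 400, 0.005, 0.015)
print(f"${cost:.4f}")  # $0.0120
```

Multiply the per-request estimate by your expected daily request volume to get a ceiling figure, and set a hard spending limit in the provider dashboard as a backstop.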