FAQ

Frequently asked questions about remote-terminal.

General

What is remote-terminal?

remote-terminal is a tool that lets you access your development workspace from anywhere: through a web browser, through Telegram, or by giving natural-language instructions to an AI.

Is it free?

Yes, remote-terminal is open source and free to use. However, the optional integrations have their own pricing:

  • Tailscale has a free tier (up to 100 devices)
  • The OpenAI API, used for LLM mode, requires a paid account

Is it secure?

Yes, when used correctly:

  • Tailscale provides end-to-end encryption
  • Only devices on your tailnet can reach the terminal
  • The Telegram bot responds only to the user IDs you allow
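
You can check exactly which devices are on your tailnet (and can therefore reach the terminal) with the standard Tailscale CLI:

# Lists every device currently on your tailnet
tailscale status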

Setup

Do I need all three features?

No. You can use any combination:

  • Web terminal only: remote-terminal start --no-tailscale --no-telegram
  • Web + Tailscale: remote-terminal start --no-telegram
  • All features: remote-terminal start
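
As a rough sketch, a .env file enabling everything might look like the following. The Telegram variables appear elsewhere in this FAQ; OPENAI_API_KEY is an assumption here, so check your configuration for the exact name:

# Telegram bot (omit when running with --no-telegram)
TELEGRAM_BOT_TOKEN=123456:ABC-your-bot-token
TELEGRAM_ALLOWED_USERS=123456789
# LLM mode; variable name assumed, verify against your setup
OPENAI_API_KEY=sk-your-key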

Can I use it without Tailscale?

Yes. Pass the --no-tailscale flag; the terminal will then only be accessible on localhost.
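
For example (the port is printed at startup; it is not documented in this FAQ):

remote-terminal start --no-tailscale
# The web terminal is now reachable only from this machine,
# e.g. http://localhost:<port> in a local browser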

Can I use a different LLM?

Currently only OpenAI is supported. Support for Claude and local LLMs is planned.

Telegram

How do I find my Telegram user ID?

Message @userinfobot on Telegram; it replies with your numeric user ID.

Can I allow multiple users?

Yes. List comma-separated user IDs in TELEGRAM_ALLOWED_USERS:

TELEGRAM_ALLOWED_USERS=123456789,987654321

Why isn’t my bot responding?

  1. Check that TELEGRAM_BOT_TOKEN is correct (you can verify it with curl, as shown below)
  2. Verify your user ID is in TELEGRAM_ALLOWED_USERS
  3. Make sure the server is running
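
To verify the token from step 1, you can call the official Telegram Bot API directly; a valid token returns "ok": true along with your bot's details:

# Replace <token> with the value of TELEGRAM_BOT_TOKEN
curl https://api.telegram.org/bot<token>/getMe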

Tailscale

What’s the difference between Serve and Funnel?

  • Serve: Only accessible from your tailnet (your devices)
  • Funnel: Publicly accessible from the internet
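
remote-terminal manages this for you, but as a sketch with the plain Tailscale CLI (assuming a recent CLI version; port 8080 is a placeholder for wherever the terminal is listening):

# Tailnet-only (Serve): reachable from your own devices
tailscale serve 8080
# Public (Funnel): reachable from anywhere on the internet
tailscale funnel 8080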

Do I need a Tailscale account?

Yes, but the free tier supports up to 100 devices.

LLM Mode

What can LLM mode do?

It can run any shell command you could run yourself. Ask it to:

  • Check git status
  • Find files
  • Read/write files
  • Run builds and tests
  • Analyze code

Is my code sent to OpenAI?

Only your messages and the outputs of the commands it runs on your behalf are sent to OpenAI. Your full codebase is not uploaded.

Why is LLM mode slow?

Each request makes one or more API calls to OpenAI, and each call can take a few seconds. Use a faster model such as gpt-4.1-nano for quicker responses.
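
If your setup selects the model through an environment variable, that is the place to change it (the name OPENAI_MODEL below is hypothetical; check your configuration):

# Hypothetical variable name; consult your own configuration
OPENAI_MODEL=gpt-4.1-nano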