What is gRPC?
gRPC (Google Remote Procedure Call) is an open-source, high-performance framework for building remote procedure call (RPC) APIs. It allows different systems to communicate efficiently over a network, supporting multiple programming languages.
Uses Protocol Buffers (ProtoBuf): A lightweight, efficient serialization format.
Supports Multiple Languages: Works with C++, Java, Python, Go, Node.js, and more.
Bidirectional Streaming: Enables real-time communication between clients and servers.
Efficient & Fast: Uses HTTP/2 for multiplexing, compression, and reduced latency.
Authentication & Security: Supports TLS for secure communication.
Load Balancing & Middleware Support: Works well with distributed systems.
Microservices communication
Real-time applications (e.g., chat, gaming)
High-performance APIs for mobile and web apps
Cloud-based services and distributed systems
Would you like an example of how to define a gRPC service using Protocol Buffers?
Streaming videos involves sending video data over a network in small chunks, allowing users to start watching before the entire file is downloaded. The method you choose depends on the use case (live streaming, on-demand streaming, peer-to-peer, etc.).
Progressive Streaming: The video file is downloaded sequentially, allowing playback before completion.
Adaptive Streaming (HLS, DASH, etc.): Adjusts video quality dynamically based on network conditions.
Live Streaming: Sends real-time video data to viewers.
HLS (HTTP Live Streaming): Used by Apple devices and many modern browsers.
DASH (Dynamic Adaptive Streaming over HTTP): Open standard alternative to HLS.
Steps to Stream via HLS:
Encode your video into multiple resolutions/bitrates.
Segment the video into small `.ts` (transport stream) files.
Generate an `.m3u8` playlist file listing the segments.
Serve the files via a web server or CDN.
Use a video player (like Video.js, ExoPlayer) to play the stream.
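The playlist step above can be sketched in a few lines. This is a minimal generator for a VOD media playlist; the segment names and the 2-second target duration are hypothetical examples, and the tags used (`#EXTM3U`, `#EXT-X-TARGETDURATION`, `#EXTINF`, `#EXT-X-ENDLIST`) are the standard HLS playlist tags:

```javascript
// Build a minimal HLS media playlist (.m3u8) from a list of segment files.
// Segment names and the 2-second target duration are illustrative.
function buildHlsPlaylist(segments, targetDuration = 2) {
  const lines = [
    "#EXTM3U",
    "#EXT-X-VERSION:3",
    `#EXT-X-TARGETDURATION:${targetDuration}`,
    "#EXT-X-MEDIA-SEQUENCE:0",
  ];
  for (const seg of segments) {
    lines.push(`#EXTINF:${seg.duration.toFixed(1)},`); // per-segment duration
    lines.push(seg.uri);
  }
  lines.push("#EXT-X-ENDLIST"); // marks a finished (VOD) playlist; omit for live
  return lines.join("\n");
}

const playlist = buildHlsPlaylist([
  { uri: "stream0.ts", duration: 2.0 },
  { uri: "stream1.ts", duration: 2.0 },
]);
console.log(playlist);
```

In practice FFmpeg writes this file for you (see the `-hls_time`/`-hls_list_size` flags later in this answer); the sketch only shows what the player actually consumes.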
Used by platforms like YouTube Live, Twitch, Facebook Live.
Requires an RTMP server (e.g., NGINX with RTMP module, Wowza, Red5).
Steps:
Set up an RTMP server.
Use software like OBS Studio or FFmpeg to send video to the RTMP server.
Distribute the stream via a streaming platform or repackage it to HLS.
Ideal for real-time interactions (video calls, game streaming).
Works in browsers without extra plugins.
Steps:
Capture video using WebRTC APIs.
Encode and transmit using WebRTC’s peer-to-peer connections.
Use a signaling server (e.g., WebSocket) to establish connections.
FFmpeg: Encoding, decoding, and streaming videos.
GStreamer: Multimedia framework for streaming.
Nginx with RTMP Module: For setting up an RTMP server.
Video.js, Plyr: Frontend players for HLS/DASH.
OBS Studio: Live streaming software.
Would you like a specific example, such as setting up an HLS stream or an RTMP server?
Handling live streaming in Node.js depends on the streaming protocol you choose. The most common approaches are:
HLS (HTTP Live Streaming) for scalable live streaming.
RTMP (Real-Time Messaging Protocol) for ingesting streams.
WebRTC for real-time, low-latency streaming.
HLS works by splitting a video stream into small `.ts` segments and generating an `.m3u8` playlist.
Install FFmpeg (to encode and segment the video).
Use a Node.js server to serve HLS segments.
```sh
npm init -y
npm install express
```
`server.js`:

```js
const express = require("express");
const path = require("path");

const app = express();
const PORT = 3000;

// Serve the HLS playlist and segments from the "public" folder
app.use(express.static(path.join(__dirname, "public")));

app.listen(PORT, () => console.log(`HLS server running at http://localhost:${PORT}`));
```
Then, use FFmpeg to stream your video as HLS:
```sh
ffmpeg -i input.mp4 -preset ultrafast -g 50 -hls_time 2 -hls_list_size 5 -f hls public/stream.m3u8
```
Now, serve `stream.m3u8` to a video player like Video.js.
RTMP is commonly used to ingest live streams into platforms like YouTube and Twitch.
Install NGINX with RTMP module.
Use FFmpeg to push the stream to the RTMP server.
Use Node.js to forward the stream.
Install NGINX with RTMP module:
```sh
sudo apt update
sudo apt install nginx libnginx-mod-rtmp
```
Edit the NGINX config (`/etc/nginx/nginx.conf`):
```nginx
rtmp {
    server {
        listen 1935;
        application live {
            live on;
            record off;
        }
    }
}
```
Restart NGINX:
```sh
sudo systemctl restart nginx
```
Push a stream from OBS Studio or FFmpeg:
```sh
ffmpeg -re -i input.mp4 -c:v libx264 -preset fast -f flv rtmp://localhost/live/stream
```
Use Node.js to relay the stream:
```js
const http = require("http");
const { spawn } = require("child_process");

http.createServer((req, res) => {
  res.writeHead(200, {
    "Content-Type": "video/mp4",
    "Transfer-Encoding": "chunked",
  });
  // Pull the RTMP stream and remux it to MP4 on the fly
  const ffmpeg = spawn("ffmpeg", [
    "-i", "rtmp://localhost/live/stream",
    "-c:v", "copy",
    "-movflags", "frag_keyframe+empty_moov", // fragmented MP4: required when writing to a non-seekable pipe
    "-f", "mp4",
    "pipe:1",
  ]);
  ffmpeg.stdout.pipe(res);
}).listen(8000);
```
Now, access the stream at `http://localhost:8000`.
For ultra-low latency, use WebRTC.
Set up a WebRTC signaling server (WebSockets).
Use `simple-peer` or `mediasoup` for P2P connections.
Capture video and stream to peers.
Install dependencies:
```sh
npm install express ws socket.io
```
Create a WebRTC signaling server (`server.js`):
```js
const express = require("express");
const http = require("http");
const socketIo = require("socket.io");

const app = express();
const server = http.createServer(app);
const io = socketIo(server);

// Relay SDP offers/answers and ICE candidates between peers
io.on("connection", (socket) => {
  socket.on("offer", (data) => socket.broadcast.emit("offer", data));
  socket.on("answer", (data) => socket.broadcast.emit("answer", data));
  socket.on("ice-candidate", (data) => socket.broadcast.emit("ice-candidate", data));
});

server.listen(3000, () => console.log("WebRTC signaling server running on port 3000"));
```
On the client side (`index.html`):
```html
<video id="video" autoplay></video>
<script>
  const video = document.getElementById("video");
  navigator.mediaDevices.getUserMedia({ video: true, audio: true })
    .then((stream) => { video.srcObject = stream; })
    .catch((err) => console.error(err));
</script>
```
HLS → Scalable for public streaming (e.g., YouTube-style).
RTMP → Best for ingestion into platforms.
WebRTC → Best for real-time (e.g., video calls, gaming).
Would you like a more detailed tutorial on one of these methods?
Here’s a complete WebRTC implementation in Node.js with a signaling server using Socket.IO and a simple HTML/JavaScript front end.
First, initialize a Node.js project and install required packages:
```sh
mkdir webrtc-app && cd webrtc-app
npm init -y
npm install express socket.io
```
File: `server.js`
This server helps peers exchange connection details (SDP offers/answers and ICE candidates).
```js
const express = require("express");
const http = require("http");
const socketIo = require("socket.io");

const app = express();
const server = http.createServer(app);
const io = socketIo(server);

app.use(express.static("public")); // Serve static files from 'public' folder

io.on("connection", (socket) => {
  console.log("New user connected:", socket.id);

  socket.on("offer", (data) => {
    socket.broadcast.emit("offer", data);
  });

  socket.on("answer", (data) => {
    socket.broadcast.emit("answer", data);
  });

  socket.on("ice-candidate", (data) => {
    socket.broadcast.emit("ice-candidate", data);
  });

  socket.on("disconnect", () => {
    console.log("User disconnected:", socket.id);
  });
});

server.listen(3000, () => console.log("WebRTC signaling server running on port 3000"));
```
Create a `public` folder and, inside it, an `index.html` file.
File: `public/index.html`
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <meta name="viewport" content="width=device-width, initial-scale=1.0">
  <title>WebRTC Video Chat</title>
  <script src="/socket.io/socket.io.js"></script>
</head>
<body>
  <h2>WebRTC Video Chat</h2>
  <video id="localVideo" autoplay playsinline></video>
  <video id="remoteVideo" autoplay playsinline></video>
  <button id="startCall">Start Call</button>

  <script>
    const socket = io();
    let localStream;
    let peerConnection;
    const config = { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] };

    const localVideo = document.getElementById("localVideo");
    const remoteVideo = document.getElementById("remoteVideo");
    const startCallButton = document.getElementById("startCall");

    async function startMedia() {
      try {
        localStream = await navigator.mediaDevices.getUserMedia({ video: true, audio: true });
        localVideo.srcObject = localStream;
      } catch (err) {
        console.error("Error accessing media devices.", err);
      }
    }

    function createPeerConnection() {
      peerConnection = new RTCPeerConnection(config);

      peerConnection.onicecandidate = (event) => {
        if (event.candidate) {
          socket.emit("ice-candidate", event.candidate);
        }
      };

      peerConnection.ontrack = (event) => {
        remoteVideo.srcObject = event.streams[0];
      };

      localStream.getTracks().forEach(track => {
        peerConnection.addTrack(track, localStream);
      });
    }

    startCallButton.addEventListener("click", async () => {
      createPeerConnection();
      const offer = await peerConnection.createOffer();
      await peerConnection.setLocalDescription(offer);
      socket.emit("offer", offer);
    });

    socket.on("offer", async (offer) => {
      if (!peerConnection) createPeerConnection();
      await peerConnection.setRemoteDescription(new RTCSessionDescription(offer));
      const answer = await peerConnection.createAnswer();
      await peerConnection.setLocalDescription(answer);
      socket.emit("answer", answer);
    });

    socket.on("answer", async (answer) => {
      await peerConnection.setRemoteDescription(new RTCSessionDescription(answer));
    });

    socket.on("ice-candidate", async (candidate) => {
      try {
        await peerConnection.addIceCandidate(new RTCIceCandidate(candidate));
      } catch (err) {
        console.error("Error adding ICE candidate", err);
      }
    });

    startMedia();
  </script>
</body>
</html>
```
Start the server:
```sh
node server.js
```
Then, open `http://localhost:3000` in two separate browser tabs or devices to test video calling.
Users access the page and grant camera/microphone permissions.
When a user clicks Start Call, it:
Creates an `RTCPeerConnection`.
Captures media streams.
Sends an SDP offer to the other user via WebSockets.
The second user receives the offer, sets it as remote description, and sends back an SDP answer.
Both peers exchange ICE candidates for NAT traversal.
Once the connection is established, media streams flow between users.
Deploy it using a cloud provider like Vercel or Heroku.
Enhance UI with CSS and buttons for muting/camera toggle.
Improve scalability with a TURN server for better NAT traversal.
Use WebRTC libraries like `simple-peer` or `mediasoup` for easier handling.
Would you like help with TURN/STUN setup for better connectivity?
WebRTC stands for Web Real-Time Communication. It is an open-source technology that enables real-time audio, video, and data sharing directly between web browsers and mobile applications without requiring external plugins or third-party software.
Peer-to-peer (P2P) communication for low-latency connections.
Supports audio, video, and data channels for file transfer and messaging.
Uses STUN and TURN servers for NAT traversal (handling firewalls and network restrictions).
Works across different platforms and devices (Chrome, Firefox, Edge, Safari, mobile apps, etc.).
Would you like a deeper dive into how WebRTC works under the hood?
The format in which video data is streamed in WebRTC depends on the encoding and transport protocols used. WebRTC does not define a specific container format like MP4; instead, it uses RTP (Real-time Transport Protocol) to transmit encoded media streams over the network.
WebRTC supports these major video codecs:
VP8 → Default codec, efficient for most browsers.
VP9 → More advanced, better compression, but requires more processing power.
H.264 (AVC) → Widely supported, especially on mobile devices.
AV1 → Newer, highly efficient but not universally supported yet.
Opus → Default, high-quality audio codec optimized for real-time communication.
G.711 → Legacy codec with wide compatibility but lower quality.
RTP (Real-time Transport Protocol) → WebRTC streams media using RTP over UDP, ensuring low latency.
SRTP (Secure RTP) → Adds encryption to RTP for secure transmission.
RTCP (RTP Control Protocol) → Monitors connection quality and provides feedback.
WebRTC does not use MP4, MKV, or AVI for streaming.
Instead, it sends raw encoded video frames (e.g., VP8, VP9, H.264) in RTP packets.
These packets are decoded in real-time by the receiving browser or app.
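To make "RTP packets" concrete, here is a sketch that parses the 12-byte fixed RTP header defined in RFC 3550. The sample packet bytes are fabricated for illustration (payload type 96 is a typical dynamic payload type that an SDP exchange might map to VP8):

```javascript
// Parse the 12-byte fixed RTP header (RFC 3550) from a packet buffer.
// The sample packet below is fabricated for illustration.
function parseRtpHeader(buf) {
  return {
    version: buf[0] >> 6,            // always 2 for RTP
    padding: (buf[0] >> 5) & 1,
    extension: (buf[0] >> 4) & 1,
    csrcCount: buf[0] & 0x0f,
    marker: buf[1] >> 7,
    payloadType: buf[1] & 0x7f,      // mapped to a codec (e.g. VP8) via SDP
    sequenceNumber: buf.readUInt16BE(2),
    timestamp: buf.readUInt32BE(4),
    ssrc: buf.readUInt32BE(8),       // identifies the media source
  };
}

const packet = Buffer.from([
  0x80, 0x60, 0x00, 0x01, // V=2, PT=96 (dynamic), seq=1
  0x00, 0x00, 0x03, 0xe8, // timestamp = 1000
  0x11, 0x22, 0x33, 0x44, // SSRC
]);
console.log(parseRtpHeader(packet));
```

The encoded video frame follows this header in the packet body; the receiver uses the sequence number and timestamp to reorder and schedule frames for decoding.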
| Technology | Encoding | Transport | Latency | Use Case |
|---|---|---|---|---|
| WebRTC | VP8, VP9, H.264 | RTP over UDP | Ultra-low | Video calls, real-time streaming |
| HLS | H.264, HEVC | HTTP (TCP) | High (5-30s) | Large-scale streaming (YouTube, Netflix) |
| RTMP | H.264 | TCP | Low (~1-3s) | Ingesting streams to YouTube/Twitch |
| DASH | VP9, H.264 | HTTP (TCP) | Medium (3-10s) | Adaptive streaming |
Would you like help implementing a specific codec or optimizing for low latency?
Since WebRTC does not directly support MP4 streaming, we need to:
Read the MP4 file,
Decode & transcode it to a WebRTC-compatible format (VP8, H.264, or VP9),
Stream the frames to connected clients over WebRTC.
Use FFmpeg to read and encode MP4 video in a WebRTC-compatible format.
Use Node.js and WebSockets for signaling.
Use WebRTC PeerConnection API to stream video to the client.
Make sure FFmpeg is installed on your system:
On Linux/macOS:
```sh
sudo apt install ffmpeg   # Ubuntu/Debian
brew install ffmpeg       # macOS
```
On Windows, download and install FFmpeg from FFmpeg.org.
```sh
mkdir webrtc-mp4-stream && cd webrtc-mp4-stream
npm init -y
npm install express socket.io wrtc
# Note: child_process is a Node.js built-in module; it does not need installing
```
File: `server.js`
This script:
Uses FFmpeg to convert MP4 into a WebRTC-compatible format.
Streams video frames to connected clients.
```js
const express = require("express");
const http = require("http");
const socketIo = require("socket.io");
const { spawn } = require("child_process");
const wrtc = require("wrtc");

const app = express();
const server = http.createServer(app);
const io = socketIo(server);

app.use(express.static("public")); // Serve the client page

let clients = []; // Store connected peers

io.on("connection", (socket) => {
  console.log("New client connected:", socket.id);

  socket.on("offer", async (offer) => {
    const peerConnection = new wrtc.RTCPeerConnection();
    const stream = createFFmpegStream();
    stream.getTracks().forEach(track => peerConnection.addTrack(track, stream));

    await peerConnection.setRemoteDescription(offer);
    const answer = await peerConnection.createAnswer();
    await peerConnection.setLocalDescription(answer);
    socket.emit("answer", peerConnection.localDescription);

    peerConnection.onicecandidate = (event) => {
      if (event.candidate) {
        socket.emit("ice-candidate", event.candidate);
      }
    };

    clients.push(peerConnection);
  });

  socket.on("ice-candidate", (candidate) => {
    clients.forEach(peer => peer.addIceCandidate(candidate));
  });

  socket.on("disconnect", () => {
    console.log("Client disconnected:", socket.id);
  });
});

// Create a WebRTC-compatible video stream fed by FFmpeg.
// wrtc's nonstandard RTCVideoSource cannot consume an encoded byte stream;
// it expects raw I420 frames pushed via source.onFrame(), so FFmpeg outputs
// rawvideo here and we slice stdout into fixed-size frames.
function createFFmpegStream() {
  const width = 640, height = 480;
  const frameSize = width * height * 1.5; // I420: 1.5 bytes per pixel

  const ffmpeg = spawn("ffmpeg", [
    "-re",                 // Read input at native frame rate
    "-i", "video.mp4",     // Input file
    "-an",                 // Disable audio
    "-vf", `scale=${width}:${height}`,
    "-pix_fmt", "yuv420p",
    "-f", "rawvideo",      // Raw I420 frames to stdout
    "pipe:1",
  ]);

  const source = new wrtc.nonstandard.RTCVideoSource();
  let buffered = Buffer.alloc(0);

  ffmpeg.stdout.on("data", (chunk) => {
    buffered = Buffer.concat([buffered, chunk]);
    while (buffered.length >= frameSize) {
      const frame = buffered.subarray(0, frameSize);
      buffered = buffered.subarray(frameSize);
      source.onFrame({ width, height, data: new Uint8ClampedArray(frame) });
    }
  });

  const stream = new wrtc.MediaStream();
  stream.addTrack(source.createTrack());
  return stream;
}

server.listen(3000, () => console.log("Server running on http://localhost:3000"));
```
File: `public/index.html`
This will handle the video playback.
```html
<!DOCTYPE html>
<html lang="en">
<head>
  <meta charset="UTF-8">
  <title>WebRTC MP4 Streaming</title>
  <script src="/socket.io/socket.io.js"></script>
</head>
<body>
  <h2>WebRTC Video Streaming</h2>
  <video id="video" autoplay playsinline></video>
  <button id="startStream">Start Streaming</button>

  <script>
    const socket = io();
    const videoElement = document.getElementById("video");
    let peerConnection;
    const config = { iceServers: [{ urls: "stun:stun.l.google.com:19302" }] };

    document.getElementById("startStream").addEventListener("click", async () => {
      peerConnection = new RTCPeerConnection(config);

      peerConnection.onicecandidate = (event) => {
        if (event.candidate) {
          socket.emit("ice-candidate", event.candidate);
        }
      };

      peerConnection.ontrack = (event) => {
        videoElement.srcObject = event.streams[0];
      };

      // Receive-only: request a video section in the offer without sending any media
      peerConnection.addTransceiver("video", { direction: "recvonly" });

      const offer = await peerConnection.createOffer();
      await peerConnection.setLocalDescription(offer);
      socket.emit("offer", offer);
    });

    socket.on("answer", async (answer) => {
      await peerConnection.setRemoteDescription(new RTCSessionDescription(answer));
    });

    socket.on("ice-candidate", async (candidate) => {
      await peerConnection.addIceCandidate(new RTCIceCandidate(candidate));
    });
  </script>
</body>
</html>
```
Place an MP4 video file named `video.mp4` in the project folder.
Start the server:
```sh
node server.js
```
Open `http://localhost:3000` in a browser and click Start Streaming.
FFmpeg decodes the MP4 file into a WebRTC-compatible video stream (negotiated as VP8 on the wire by default).
The WebRTC server (Node.js + WebSockets) creates a peer connection and streams video frames to connected clients.
The client browser receives the stream and plays it in the `<video>` element.
Add Audio Support: Modify the FFmpeg command to include Opus audio (`-c:a libopus`), the codec WebRTC expects.
Use H.264 Codec: Replace VP8 (`-c:v libvpx`) with H.264 (`-c:v libx264`) for better compatibility on Safari.
Stream from a URL: Instead of a local file, use `-i http://example.com/video.mp4`.
Handle Multiple Clients: Manage multiple WebRTC connections dynamically.
Handle Multiple Clients: Manage multiple WebRTC connections dynamically.
Would you like help improving performance or handling multiple viewers?
gRPC is not designed for real-time video streaming. It is primarily used for efficient, high-performance communication between microservices, handling structured data efficiently over HTTP/2. However, it has limitations when it comes to real-time media streaming.
Designed for Request-Response Communication
gRPC works best for structured RPC calls, not continuous media streams.
No Native Support for RTP/WebRTC
Video streaming typically requires RTP (Real-time Transport Protocol) or WebRTC, which gRPC does not support.
Based on HTTP/2, Which Is Not Optimal for Streaming
gRPC uses streams over HTTP/2, which introduces latency compared to protocols like UDP (used in WebRTC).
Compression & Framing Issues
Video needs specialized codecs (e.g., H.264, VP8) and real-time packet handling, which gRPC does not optimize for.
While gRPC is not ideal for real-time video transmission, it can be used in supporting roles in a video streaming system:
✅ Metadata Exchange & Control Signals
Managing stream initiation, authentication, user preferences, or quality control settings.
✅ Chunked File Transfer (VOD, Not Live)
If you’re not doing real-time streaming, you can send video as binary chunks using gRPC streaming.
✅ Hybrid Approach
Use gRPC for control signaling (e.g., start/stop commands).
Use WebRTC/RTMP/HLS/DASH for actual video transmission.
| Technology | Use Case | Protocol |
|---|---|---|
| WebRTC | Low-latency, peer-to-peer video calls | UDP + RTP |
| RTMP | Live streaming to platforms like YouTube/Twitch | TCP |
| HLS/DASH | Large-scale video-on-demand (VOD) | HTTP (Segmented) |
| RTP + RTSP | Professional-grade live streaming | UDP/TCP |
✅ Use gRPC for signaling and metadata exchange.
❌ Do NOT use gRPC for real-time video transport.
👉 Use WebRTC, RTP, or RTMP for the actual video streaming.
Would you like an example of a hybrid WebRTC + gRPC implementation?
tRPC (TypeScript Remote Procedure Call) is a TypeScript-first, type-safe framework for building APIs without needing a REST or GraphQL schema. It allows strongly-typed communication between a client and server using TypeScript, eliminating the need for manual API contracts.
✅ End-to-End Type Safety → Automatically infers types between client & server.
✅ No Boilerplate → No need to define REST endpoints or GraphQL schemas.
✅ Works with Any Frontend → Compatible with React, Next.js, Vue, and more.
✅ Built on Top of JSON-RPC → Uses function calls instead of RESTful endpoints.
✅ Lightweight and Fast → No need for extra API layers like GraphQL resolvers.
Instead of creating traditional API endpoints (`GET /users`, `POST /order`), you define procedures that the client can call like functions.
Server: Define API Routes
```ts
import { initTRPC } from "@trpc/server";

const t = initTRPC.create();

export const appRouter = t.router({
  hello: t.procedure.query(() => "Hello, tRPC!"),
});

export type AppRouter = typeof appRouter;
```
Client: Call the API Like a Function
```ts
import { createTRPCProxyClient, httpBatchLink } from "@trpc/client";
import type { AppRouter } from "./server"; // Import types from the server

const client = createTRPCProxyClient<AppRouter>({
  links: [httpBatchLink({ url: "http://localhost:4000" })],
});

const result = await client.hello.query(); // "Hello, tRPC!"
```
| Feature | tRPC | REST | GraphQL |
|---|---|---|---|
| Type Safety | ✅ Full | ❌ Manual | ⚠️ Partial |
| Boilerplate | ✅ Minimal | ❌ Requires controllers | ⚠️ Needs resolvers |
| Performance | ✅ Fast | ⚠️ Varies | ❌ Overhead |
| Schema Definition | ❌ No Schema Needed | ✅ OpenAPI | ✅ SDL |
| Batch Requests | ✅ Yes | ❌ No | ✅ Yes |
Full-stack TypeScript apps (e.g., Next.js, React, Vue, Svelte).
Microservices communication with type safety.
Alternative to GraphQL for frontend-heavy applications.
Would you like a full-stack example with Next.js?
Batch requests allow multiple API calls to be grouped into a single request and processed together, rather than making individual requests for each piece of data. This can improve performance, reduce network overhead, and simplify the process of making multiple calls to an API.
Performance Optimization: By grouping requests, the client can reduce the total number of network requests and response times, improving performance.
Simplified Client Logic: Instead of handling multiple individual API calls, the client can make a single request and process the result more efficiently.
Error Handling: Instead of managing errors for each individual request, batch processing can return all results or errors in a single response.
Reduced Overhead: Fewer HTTP requests mean less overhead on both the client and server sides, reducing server load and response times.
Typically, a batch request involves wrapping multiple API calls inside a single payload, often as an array or a set of function calls. Each call will return its response in a specific order.
Batch Request Format (JSON)
```json
{
  "requests": [
    { "method": "GET", "url": "/user/1" },
    { "method": "POST", "url": "/order", "body": { "item": "laptop" } },
    { "method": "GET", "url": "/products" }
  ]
}
```
Response
```json
{
  "responses": [
    { "status": 200, "data": { "id": 1, "name": "John Doe" } },
    { "status": 201, "data": { "orderId": 1234, "status": "created" } },
    { "status": 200, "data": [ /* list of products */ ] }
  ]
}
```
In tRPC, batch requests can be implemented by allowing the client to call multiple procedures in one go. The client can send a request with several method calls, and the server processes them and returns a batch of responses.
For example, with tRPC, the client can call multiple queries or mutations in a single request, and the server responds with each result in the same response.
Less Overhead: By sending multiple requests in a single HTTP request, you reduce the total overhead for making multiple round-trips to the server.
Atomicity: You can ensure that all requests are processed together. If one request fails, all the others may also be rolled back, depending on how the server handles the batch.
Simplified Communication: Especially useful in applications where multiple related data points need to be fetched or modified simultaneously.
GraphQL supports batching by allowing multiple queries or mutations to be sent in a single request.
```graphql
{
  user(id: 1) {
    name
    email
  }
  posts {
    title
    author {
      name
    }
  }
}
```
Some REST APIs implement batch requests manually, where the client sends multiple requests in a single payload.
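Such a manual REST batch endpoint can be sketched as a dispatcher over the payload shape shown earlier. The routes and response data here are hypothetical stand-ins; in a real server the dispatch table would be your route handlers:

```javascript
// Dispatch a batch payload (like the JSON format above) against a table of
// handlers. Routes and data are hypothetical stand-ins for real endpoints.
const handlers = {
  "GET /user/1": () => ({ status: 200, data: { id: 1, name: "John Doe" } }),
  "POST /order": (body) => ({ status: 201, data: { orderId: 1234, item: body.item } }),
};

function handleBatch(batch) {
  return {
    responses: batch.requests.map(({ method, url, body }) => {
      const handler = handlers[`${method} ${url}`];
      return handler ? handler(body) : { status: 404, data: null };
    }),
  };
}

const result = handleBatch({
  requests: [
    { method: "GET", url: "/user/1" },
    { method: "POST", url: "/order", body: { item: "laptop" } },
  ],
});
console.log(result.responses.map((r) => r.status)); // [ 200, 201 ]
```

Note that responses come back in request order, which is how the client matches each result to the call that produced it.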
Would you like a practical example of how to implement batch requests in tRPC or another framework?
In tRPC, you can handle batch requests by grouping multiple procedures into a single call. When the client is configured with `httpBatchLink`, concurrent calls are automatically coalesced into one HTTP request; you can also create batch-like behavior yourself by leveraging tRPC's query and mutation system.
tRPC allows calling multiple procedures in a single request by invoking different queries or mutations sequentially.
You can simulate a batch request by having the client call multiple procedures, and then the server can respond with the results of each procedure.
Let’s create a basic tRPC router with multiple queries that we want to group together in a single call:
```ts
import { initTRPC } from "@trpc/server";

const t = initTRPC.create();

export const appRouter = t.router({
  getUser: t.procedure
    .input((value: unknown) => {
      // Custom validator: tRPC input parsers receive the raw value
      if (typeof value !== "string") throw new Error("Expected a string id");
      return value;
    })
    .query((opts) => {
      // Simulate a DB fetch
      return { id: opts.input, name: `User ${opts.input}` };
    }),
  getPosts: t.procedure.query(() => {
    // Simulate fetching posts
    return [
      { title: "Post 1", content: "Content of Post 1" },
      { title: "Post 2", content: "Content of Post 2" },
    ];
  }),
});

// Export the type of the router
export type AppRouter = typeof appRouter;
```
On the client side, you can make a batch request by calling multiple procedures in a single call, using the tRPC client:
```ts
import { createTRPCProxyClient, httpBatchLink } from "@trpc/client";
import type { AppRouter } from "./server"; // Import types from your server

const client = createTRPCProxyClient<AppRouter>({
  links: [httpBatchLink({ url: "http://localhost:4000" })],
});

async function fetchBatchData() {
  try {
    // Calling multiple queries as part of the batch request
    const [user, posts] = await Promise.all([
      client.getUser.query("1"), // Call the getUser query
      client.getPosts.query(),   // Call the getPosts query
    ]);
    console.log("User:", user);
    console.log("Posts:", posts);
  } catch (error) {
    console.error("Error fetching data:", error);
  }
}

fetchBatchData();
```
The client sends a single HTTP request with multiple queries inside it (`getUser.query('1')` and `getPosts.query()`).
These queries are executed concurrently using `Promise.all` for batch execution. You could also process them sequentially if needed.
The server processes each query and returns the results as a combined response in one go.
tRPC with Batch Mutations
If you need to batch mutations, you can follow a similar approach:
```ts
export const appRouter = t.router({
  createUser: t.procedure
    .input((value: unknown) => {
      if (typeof value !== "string") throw new Error("Expected a string name");
      return value;
    })
    .mutation((opts) => {
      // Simulate user creation
      return { id: Math.random().toString(36).slice(2, 11), name: opts.input };
    }),
  createPost: t.procedure
    .input((value: unknown) => {
      if (typeof value !== "string") throw new Error("Expected a string title");
      return value;
    })
    .mutation((opts) => {
      // Simulate post creation
      return { id: Math.random().toString(36).slice(2, 11), title: opts.input };
    }),
});
```
```ts
async function createBatch() {
  try {
    // Call multiple mutations in parallel
    const [user, post] = await Promise.all([
      client.createUser.mutate("John Doe"),
      client.createPost.mutate("My New Post"),
    ]);
    console.log("Created User:", user);
    console.log("Created Post:", post);
  } catch (error) {
    console.error("Error creating data:", error);
  }
}

createBatch();
```
Performance: Calling multiple procedures at once can reduce the number of network requests and make the application more efficient.
Atomicity: If using multiple mutations, you can ensure the procedures run in parallel, and then handle errors or success together.
Convenience: Simplifies client-side logic by grouping related requests into a single batch.
While tRPC doesn't expose an explicit batch API the way GraphQL does, you can easily achieve batch behavior by calling multiple queries or mutations in parallel with `Promise.all` (which `httpBatchLink` coalesces into a single network call). This approach lets you handle multiple procedures in one round trip and process them more efficiently.
Would you like more details on how to handle errors or success across multiple queries in a batch?
No, tRPC does not use Protocol Buffers (protobufs).
tRPC is a TypeScript-first framework that focuses on type safety between the client and server. It uses TypeScript types to automatically infer the structure of your API calls, and this eliminates the need for defining additional schemas like protobufs.
With tRPC, the types you define on the server are directly accessible on the client, so there’s no need for a schema definition language (like protobufs, JSON Schema, or GraphQL schemas). The framework leverages TypeScript's type inference to automatically handle types for both the client and the server.
tRPC uses TypeScript types to generate type-safe API calls between the client and the server.
Protocol Buffers (protobufs) are used to define binary serialization formats for APIs and data communication between systems, often in microservices architectures. They require explicit serialization and deserialization of data, and the schema is usually defined in a `.proto` file.
Protobufs are often used in situations where:
Cross-language communication is required (e.g., a service written in Go communicates with a service in Python).
Optimized binary serialization is needed for large-scale applications or microservices.
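To show concretely what "binary serialization" buys, here is a hand-encoding of a protobuf message with a single string field (field number 1), following the standard protobuf wire format, compared against the equivalent JSON. The message shape is an illustrative example:

```javascript
// Hand-encode a protobuf message `{ string name = 1; }` in the standard
// wire format, to contrast binary serialization with JSON.
function encodeNameMessage(name) {
  const nameBytes = Buffer.from(name, "utf8");
  return Buffer.concat([
    Buffer.from([0x0a]),             // tag byte: (field 1 << 3) | wire type 2 (length-delimited)
    Buffer.from([nameBytes.length]), // length as a single varint byte (fine for short strings)
    nameBytes,
  ]);
}

const bin = encodeNameMessage("world");
const json = Buffer.from(JSON.stringify({ name: "world" }), "utf8");
console.log(bin.length, "bytes (protobuf) vs", json.length, "bytes (JSON)"); // 7 vs 16
```

The binary form drops the field name entirely (only the field number travels on the wire), which is where much of protobuf's size advantage comes from; in practice a generated library, not hand-rolled code, does this encoding.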
tRPC leverages TypeScript and provides end-to-end type safety without needing additional serialization tools like protobufs.
Protobufs are used for binary data serialization, and are usually applied in situations requiring cross-language data exchange or performance optimization in terms of size and speed.
If you're using tRPC, you do not need protobufs as everything is type-safe and based on TypeScript types.
Would you like to dive deeper into how tRPC handles data serialization or learn more about other communication frameworks that use protobufs?
Under the hood, tRPC uses JSON as the default data transfer format for communication between the client and server. When a client makes a request to the server (either a query or mutation), the data is typically serialized into JSON and transmitted via HTTP (or WebSockets, depending on the setup).
Client-Server Communication:
When a client calls a procedure (query or mutation) on the server, the client sends the request to the server, and the server processes it. The input data (e.g., parameters or payload) is serialized to JSON format and sent over HTTP or WebSockets.
Request Format:
A typical request in tRPC is sent as a JSON object, containing information about the procedure being called, its arguments, and any other required metadata.
For example, when calling a query:
```json
{
  "jsonrpc": "2.0",
  "method": "getUser",
  "params": ["1"],
  "id": "12345"
}
```
`"jsonrpc"` specifies the version.
`"method"` is the procedure being called (`getUser` in this case).
`"params"` contains the parameters for the procedure (`["1"]` for the user ID).
`"id"` is used for tracking the request (especially useful when batching requests).
Response Format:
Once the server processes the request, it sends a JSON response back to the client, typically containing the result or any error information.
For example:
```json
{
  "jsonrpc": "2.0",
  "result": { "id": "1", "name": "John Doe" },
  "id": "12345"
}
```
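The `"id"` field is what lets a client match each response back to the request that produced it, which matters when several requests share one connection or one batch. A minimal sketch of that correlation, with illustrative IDs and data:

```javascript
// Correlate JSON-RPC-style responses with their requests via the "id" field.
// The id, method, and payload values are illustrative.
const pending = new Map();

function sendRequest(method, params, id) {
  pending.set(id, { method, params }); // remember what we asked for
  return { jsonrpc: "2.0", method, params, id }; // wire payload
}

function receiveResponse(response) {
  const request = pending.get(response.id); // look up by id, not arrival order
  pending.delete(response.id);
  return { request, result: response.result };
}

sendRequest("getUser", ["1"], "12345");
const { request, result } = receiveResponse({
  jsonrpc: "2.0",
  result: { id: "1", name: "John Doe" },
  id: "12345",
});
console.log(request.method, "->", result.name); // getUser -> John Doe
```

Real clients generate the IDs automatically and also use the map to reject the pending promise when an error response arrives.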
Serialization:
Data is serialized to JSON on both the client and server sides. This is the most common format for web APIs because of its ease of use, human-readability, and support in almost all programming languages.
HTTP / WebSocket:
By default, tRPC uses HTTP to transfer data between the client and the server. However, it can also be configured to use WebSockets for real-time communication, where the same JSON-based communication is used over a persistent connection.
Interoperability: JSON is supported by almost all modern web frameworks, making it easy to integrate with existing applications.
Human-readable: JSON is text-based and easy to debug, which is a big advantage during development.
TypeScript Support: Since tRPC is designed for TypeScript-first applications, working with JSON makes sense as it is compatible with TypeScript’s type system and easy to handle natively.
The client sends a JSON-encoded request to the server (e.g., via HTTP POST or WebSocket message).
The server processes the request using the specified procedure (query or mutation), and the response data is JSON-encoded and sent back to the client.
Both client and server communicate over a text-based protocol (JSON) over HTTP (or WebSockets), making it simple and efficient.
In contrast to frameworks like gRPC that use Protocol Buffers (protobufs) for efficient binary serialization, tRPC sticks with JSON for simplicity, ease of use, and full compatibility with TypeScript. While gRPC focuses on performance optimizations, tRPC focuses on developer experience and seamless integration with TypeScript.
tRPC uses JSON as the format for serializing and transferring data between the client and server.
Data is sent via HTTP or WebSocket using JSON-encoded requests and responses.
This approach makes tRPC easy to use in TypeScript-based applications while offering a clean and efficient communication protocol for most web-based applications.
Would you like to see an example of how the data is sent and received in a tRPC application?
To implement gRPC in Node.js, you'll need to follow a few steps. Here's a simple guide to help you set up a gRPC server and client in Node.js.
First, you'll need to install gRPC and Protobuf dependencies. You can install them using npm:
```bash
npm install @grpc/grpc-js @grpc/proto-loader
```
- `@grpc/grpc-js`: the gRPC library for Node.js.
- `@grpc/proto-loader`: a utility for loading `.proto` files (Protocol Buffers).
Create a file called `service.proto` to define the gRPC service and message structure.
```proto
// service.proto
syntax = "proto3";

package example;

// The service definition
service Greeter {
  // Sends a greeting
  rpc SayHello (HelloRequest) returns (HelloReply);
}

// The request message containing the user's name
message HelloRequest {
  string name = 1;
}

// The response message containing the greetings
message HelloReply {
  string message = 1;
}
```
This defines a simple `Greeter` service with a method `SayHello`, which takes a `HelloRequest` and returns a `HelloReply`.
Now let's implement the server using the `@grpc/grpc-js` library in Node.js.
```js
// server.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Load the protobuf file
const PROTO_PATH = './service.proto';
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {});
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
const greeter = protoDescriptor.example.Greeter;

// Implement the SayHello function
function sayHello(call, callback) {
  const name = call.request.name || 'World';
  callback(null, { message: `Hello, ${name}!` });
}

// Create the gRPC server
const server = new grpc.Server();

// Add the service to the server
server.addService(greeter.service, {
  SayHello: sayHello,
});

// Start the server on port 50051
server.bindAsync('127.0.0.1:50051', grpc.ServerCredentials.createInsecure(), () => {
  console.log('Server running at http://127.0.0.1:50051');
  server.start();
});
```
Now, implement the client that will communicate with the server.
```js
// client.js
const grpc = require('@grpc/grpc-js');
const protoLoader = require('@grpc/proto-loader');

// Load the protobuf file
const PROTO_PATH = './service.proto';
const packageDefinition = protoLoader.loadSync(PROTO_PATH, {});
const protoDescriptor = grpc.loadPackageDefinition(packageDefinition);
const greeter = protoDescriptor.example.Greeter;

// Create a client
const client = new greeter('localhost:50051', grpc.credentials.createInsecure());

// Make a call to the SayHello method
client.SayHello({ name: 'Node.js' }, (error, response) => {
  if (error) {
    console.error(error);
  } else {
    console.log('Greeting:', response.message);
  }
});
```
Run the Server:
In one terminal window, run the server:

```bash
node server.js
```

The server should start and listen on `localhost:50051`.
Run the Client:
In another terminal window, run the client:

```bash
node client.js
```

The client will call the `SayHello` method of the server, and you should see a response like:

```
Greeting: Hello, Node.js!
```
Protobuf File (`.proto`): Defines the structure of messages and services, including the `SayHello` method.
Server:
- Loads the protobuf definition using `@grpc/proto-loader`.
- Implements the method `sayHello` that responds to the `SayHello` request.
- Binds the server to `localhost:50051`.
Client:
- Loads the protobuf definition and creates a client.
- Sends a `SayHello` request with the name `"Node.js"` and prints the response.
This example shows how to set up a simple gRPC server and client in Node.js. You can expand on this by adding more complex services and methods, handling errors, and using secure communication with SSL/TLS for production environments.
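For the TLS part, a server-side sketch might look like the following. The certificate paths are placeholders, and `ServerCredentials.createSsl` is the @grpc/grpc-js API for loading key/certificate pairs:

```javascript
// secure-server.js — a sketch of TLS setup, assuming you have already
// generated server.crt / server.key (paths below are placeholders).
const fs = require('fs');
const grpc = require('@grpc/grpc-js');

const credentials = grpc.ServerCredentials.createSsl(
  null, // root CA certificate; only needed when verifying client certs
  [{
    cert_chain: fs.readFileSync('./certs/server.crt'),
    private_key: fs.readFileSync('./certs/server.key'),
  }],
  false // do not require client certificates (no mutual TLS)
);

// Bind with the TLS credentials instead of createInsecure():
// server.bindAsync('0.0.0.0:50051', credentials, () => server.start());
```

The client would correspondingly use `grpc.credentials.createSsl(...)` with the CA certificate it trusts, instead of `createInsecure()`.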
Let me know if you want to add more functionality, or need help with any other part of gRPC!
To call a tRPC API from another application (which might not be in the same codebase), you can set up HTTP requests to interact with the tRPC server. Since tRPC is designed with TypeScript and doesn't require a schema definition (like GraphQL or gRPC), you typically interact with it using REST-like HTTP requests.
Set Up the tRPC Server:
If you haven't already, you need a tRPC server running. Here's a simple setup using Next.js and tRPC:
Install tRPC Dependencies:
```bash
npm install @trpc/server @trpc/client
```
Create the tRPC Router and Procedures (`/src/server/routers/appRouter.ts`):
```ts
// appRouter.ts
import * as trpc from '@trpc/server';
import { z } from 'zod';

export const appRouter = trpc
  .router()
  .query('getUser', {
    input: z.string(),
    resolve({ input }) {
      // Return mock data based on input
      return { id: input, name: 'John Doe' };
    },
  })
  .mutation('createUser', {
    input: z.object({
      name: z.string(),
    }),
    resolve({ input }) {
      // Simulate user creation
      return { id: '1', name: input.name };
    },
  });

// Export the router type so clients can infer procedure types
export type AppRouter = typeof appRouter;
```
Create the tRPC API Handler in your server-side framework (e.g., Next.js API route):
```ts
// pages/api/trpc/[trpc].ts (for Next.js)
import * as trpc from '@trpc/server/adapters/next';
import { appRouter } from '../../src/server/routers/appRouter';

export default trpc.createNextApiHandler({
  router: appRouter,
  createContext: () => null,
});
```
This sets up a simple tRPC server with a couple of procedures (`getUser`, `createUser`).
Calling tRPC from Another Application:
To call the tRPC API from an external application, you'll be interacting with the HTTP endpoints exposed by the tRPC server.
Here’s how you can call the tRPC API from another application using HTTP requests.
Using Axios or Fetch: Since tRPC is based on HTTP, you can send requests via HTTP libraries like Axios or fetch.
First, install Axios:
```bash
npm install axios
```
Then, create the function to make the tRPC call:
```js
// client.js
const axios = require('axios');

async function callTrpc() {
  try {
    const response = await axios.post('http://localhost:3000/api/trpc/getUser', {
      jsonrpc: '2.0',
      method: 'getUser',
      params: ['1'], // the input you need for the getUser query
      id: '12345',   // request ID
    });
    console.log('Response from tRPC:', response.data);
  } catch (error) {
    console.error('Error calling tRPC:', error);
  }
}

callTrpc();
```
In this example:
- We're sending a POST request to the tRPC API with `jsonrpc`, `method`, `params`, and `id` fields.
- Replace `http://localhost:3000/api/trpc/getUser` with your tRPC server's endpoint.
You can use the native `fetch` API as well:
```js
async function callTrpc() {
  const response = await fetch('http://localhost:3000/api/trpc/getUser', {
    method: 'POST',
    headers: {
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({
      jsonrpc: '2.0',
      method: 'getUser',
      params: ['1'],
      id: '12345',
    }),
  });
  const data = await response.json();
  console.log('Response from tRPC:', data);
}

callTrpc();
```
The response from tRPC will usually be in the following format:
```json
{
  "jsonrpc": "2.0",
  "result": { "id": "1", "name": "John Doe" },
  "id": "12345"
}
```
You can extract the `result` from the response and use it as needed in your application.
When calling tRPC methods, always handle errors properly. If the procedure fails or there is an issue with the request, the server will respond with an error in the `error` field.
For example, if the input is invalid:

```json
{
  "jsonrpc": "2.0",
  "error": { "code": -32602, "message": "Invalid params" },
  "id": "12345"
}
```
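A small helper can centralize this check, throwing when the `error` field is present and returning `result` otherwise. This helper is illustrative and not part of any tRPC library:

```javascript
// unwrapTrpcResponse — a hypothetical helper for the response shapes
// shown above; it is not part of the tRPC client itself.
function unwrapTrpcResponse(body) {
  if (body.error) {
    // Surface the server-side error code and message to the caller.
    const err = new Error(body.error.message);
    err.code = body.error.code;
    throw err;
  }
  return body.result;
}
```

For example, passing the success response above returns `{ id: '1', name: 'John Doe' }`, while the invalid-params response throws an `Error` carrying code `-32602`.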
Set up tRPC Server with appropriate procedures and handlers.
Make HTTP Requests from another application using Axios or fetch to interact with tRPC server endpoints.
Handle Responses in JSON format, and manage success and error states.
Yes, tRPC is indeed best suited for full-stack applications. Here's why:
tRPC provides type-safe communication between the client and server. This means that you get autocompletion, type checking, and error catching, making it a great choice for full-stack applications where you have a shared codebase between the client and server.
Since tRPC uses TypeScript to infer types automatically from the server-side procedures and make those types available to the client, this leads to a seamless developer experience and reduces runtime errors related to type mismatches.
Unlike GraphQL or REST APIs, where you define schemas and types separately, tRPC allows you to define procedures directly in your backend and then automatically infer types on the frontend. This reduces boilerplate code and simplifies the workflow for full-stack apps.
tRPC works great with Next.js and other full-stack frameworks that combine frontend and backend in one project. It easily integrates with API routes in Next.js, making it easy to define procedures and handle requests in one unified codebase.
In a full-stack setup, tRPC enables you to directly call server-side functions from the client without having to deal with HTTP requests, endpoints, or API documentation. It abstracts away the need for traditional REST or GraphQL endpoints, allowing you to call backend logic seamlessly.
While not a core feature of tRPC, you can still implement real-time updates (e.g., with WebSockets or Server-Sent Events) within your full-stack app. This can be done by extending tRPC with custom handlers or integrating with other libraries.
Single Page Applications (SPAs) with complex logic on both the client and server (e.g., dashboards, admin panels).
Next.js or Remix apps, where server-side logic and frontend components are tightly coupled.
Microservices or modular full-stack architectures, where backend services are built with shared types and typesafety is important.
External Clients: While tRPC is perfect for internal communication within a full-stack app, it may not be the best choice if you need to expose an API to external clients (e.g., mobile apps, third-party services), as it relies heavily on TypeScript for type safety, which external clients might not understand directly.
More Complexity for Small Projects: If you're working on a small project or MVP, using tRPC could add complexity, as it's a relatively new concept compared to traditional REST or GraphQL APIs.
tRPC is an excellent choice for full-stack applications because it provides a tight coupling between the frontend and backend, ensuring type safety, reducing boilerplate code, and simplifying API calls. However, if your project involves external consumers (e.g., mobile apps or third-party integrations), a more traditional approach like REST or GraphQL might be better suited.
If you're building a full-stack app with shared code and want a smooth development experience, tRPC is a great option!
Yes, it is possible to call typesafe tRPC functions from another repository, but there are some important considerations and steps you need to follow. The main challenge when doing this is that tRPC is designed for intra-application communication (e.g., between client and server in a monorepo or full-stack app). However, if you want to call tRPC functions from a different repository, you'll need to expose the tRPC server's functionality over HTTP or another transport layer, and then consume that API from the external repository.
Expose the tRPC API Over HTTP: To enable communication between two repositories, you’ll have to expose your tRPC functions via HTTP endpoints. This is typically done by integrating tRPC with a backend framework like Next.js (via API routes) or Express.
For example, if you're using Next.js for your backend, you would define your API route to handle the tRPC requests.
```ts
// pages/api/trpc/[trpc].ts (in your tRPC backend repo)
import * as trpc from '@trpc/server/adapters/next';
import { appRouter } from '../../src/server/routers/appRouter';

export default trpc.createNextApiHandler({
  router: appRouter,
  createContext: () => null,
});
```
This exposes your tRPC functions at the endpoint `http://localhost:3000/api/trpc`.
Create a Client to Call the tRPC API: Now that your tRPC server is exposed via HTTP, the external repository can call the tRPC API as if it were any other HTTP API. You can use libraries like Axios or fetch to send requests to the tRPC server.
You can still use tRPC’s client-side functions if you want to maintain typesafety. However, you will need to create a client that communicates with your server's API.
Here’s how you can achieve that:
In the external repo, create a tRPC client that connects to the tRPC server over HTTP.
```ts
// trpcClient.ts (in your external repo)
import { createTRPCClient } from '@trpc/client';
import type { AppRouter } from 'your-trpc-server'; // Import the types from your tRPC server

// Create a client that calls the tRPC server over HTTP
const trpc = createTRPCClient<AppRouter>({
  url: 'http://localhost:3000/api/trpc', // Replace with your server URL
});

async function callGetUser() {
  const user = await trpc.query('getUser', '1'); // Call a tRPC procedure
  console.log(user);
}

callGetUser();
```
Sharing Types Between Repos: One of the best parts of tRPC is that it shares types between the client and server. To ensure that the external repo has the correct types, you should share the tRPC router types between the two repositories.
There are a few ways to do this:
Monorepo setup: If both repositories are part of a monorepo, you can share types directly by importing them from a shared package.
Package or NPM module: If the repositories are separate but still need to share types, you can create a shared package (or publish the types as a private npm module) that both repositories depend on. In the external repo, you can import the types as a dependency.
For example, in the internal repo (tRPC server), you might create a shared package or module with your types:
```ts
// shared-types/index.ts (shared module)
export type AppRouter = typeof appRouter;
```
Then, in the external repo, you import these types:
```ts
import { AppRouter } from 'shared-types'; // Import from shared types module
```
Call the tRPC API: Once you've set up the client in the external repo, you can call any of the tRPC methods as you would if you were on the same repository.
Authentication and Authorization: When calling the tRPC API from another repository, you may need to handle authentication and authorization, especially if your API is protected. You’ll need to ensure that the external repository sends appropriate authentication tokens (e.g., via headers).
CORS (Cross-Origin Resource Sharing): If the tRPC server is hosted on a different domain or port than the external repository, you’ll need to handle CORS by allowing the external domain in your tRPC server's configuration.
Error Handling: Since you're now making HTTP requests between different repos, you need to manage errors effectively (e.g., network errors, timeouts, server-side errors).
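For the CORS point, one lightweight approach is to wrap the API handler yourself. The helper below is an illustrative sketch (the function name and allowed origin are placeholders) that works with any Node-style `(req, res)` handler, including the tRPC API route handler:

```javascript
// withCors — a hypothetical wrapper that adds CORS headers to any
// Node-style (req, res) handler, e.g. the tRPC API route handler.
function withCors(handler, allowedOrigin) {
  return function (req, res) {
    res.setHeader('Access-Control-Allow-Origin', allowedOrigin);
    res.setHeader('Access-Control-Allow-Methods', 'GET,POST,OPTIONS');
    res.setHeader('Access-Control-Allow-Headers', 'Content-Type, Authorization');

    // Answer preflight requests directly, without invoking the handler.
    if (req.method === 'OPTIONS') {
      res.statusCode = 204;
      res.end();
      return;
    }
    return handler(req, res);
  };
}
```

You would then export something like `withCors(trpcHandler, 'https://external-app.example.com')` from the API route instead of the bare handler.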
tRPC Backend: Exposes `getUser` and other functions over HTTP.
External Repo: Calls these functions over HTTP using a tRPC client.
Types: The types are shared between the repos (either through a shared package or monorepo structure).
Yes, you can call typesafe tRPC functions from another repo by exposing the tRPC functions over HTTP. You need to:
Set up a tRPC server that handles the API requests.
Create a tRPC client in the external repository to communicate with the server.
Share types between the two repositories (using a monorepo or shared packages).
This allows you to maintain typesafety while interacting with the tRPC functions across repositories.