Supabase Realtime
Listens to changes in a PostgreSQL Database and broadcasts them over websockets.
Contents
- Status
- Example
- Introduction
- Quick start
- Getting Started
- Contributing
- Releases
- License
- Credits
Status
- Alpha: Under heavy development
- Beta: Ready for use. But go easy on us, there may be a few kinks.
- 1.0: Use in production!
This repo is still under heavy development and the documentation is evolving. You're welcome to try it, but expect some breaking changes. Watch "releases" of this repo to receive a notification when we are ready for Beta. And give us a star if you like it!
Example
import { Socket } from '@supabase/realtime-js'

var socket = new Socket(process.env.REALTIME_URL)
socket.connect()

// Listen to only INSERTS on the 'users' table in the 'public' schema
var usersInserts = socket.channel('realtime:public:users')
  .join()
  .on('INSERT', payload => { console.log('Update received!', payload) })

// Listen to all changes from the 'public' schema
var publicChanges = socket.channel('realtime:public')
  .join()
  .on('*', payload => { console.log('Update received!', payload) })

// Listen to all changes in the database
let allChanges = socket.channel('realtime:*')
  .join()
  .on('*', payload => { console.log('Update received!', payload) })
Introduction
What is this?
This is an Elixir server (Phoenix) that allows you to listen to changes in your database via websockets.
It works like this:
- the Phoenix server listens to PostgreSQL's replication functionality (using Postgres' logical decoding)
- it converts the byte stream into JSON
- it then broadcasts them over websockets (an illustrative payload is sketched below)
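For illustration, a single change delivered over the socket might look roughly like the object below. The exact field names are an assumption for this sketch, not a documented wire format.

// Illustrative only: an assumed shape for one INSERT on public.users.
var examplePayload = {
  schema: 'public',
  table: 'users',
  type: 'INSERT',                          // INSERT | UPDATE | DELETE
  columns: [                               // column metadata decoded from the WAL
    { name: 'id', type: 'int8' },
    { name: 'name', type: 'text' }
  ],
  record: { id: 1, name: 'Jane Doe' }      // the new row, converted to JSON
}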
Cool, but why not just use Postgres' NOTIFY?
A few reasons:
- You don't have to set up triggers on every table
- NOTIFY has a payload limit of 8000 bytes and will fail for anything larger. The usual solution is to send an ID and then fetch the record, but that's heavy on the database (see the sketch after this list)
- This server consumes one connection to the database, then you can connect many clients to this server. Easier on your database, and to scale up you just add realtime servers
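For contrast, here is a rough sketch of that NOTIFY workaround using the node-postgres (pg) client. The channel name, table, and the trigger that would call pg_notify are hypothetical and not part of this project.

// Sketch of the usual NOTIFY pattern this server avoids (hypothetical names).
// Assumes a trigger elsewhere calls pg_notify('table_changes', NEW.id::text).
const { Client } = require('pg')

const client = new Client({ connectionString: process.env.DATABASE_URL })

async function listenForChanges () {
  await client.connect()
  await client.query('LISTEN table_changes')

  client.on('notification', async (msg) => {
    // NOTIFY payloads are capped at 8000 bytes, so only an ID is sent...
    const { rows } = await client.query('SELECT * FROM users WHERE id = $1', [msg.payload])
    // ...and the full record is re-fetched, costing an extra query per change.
    console.log('Changed record:', rows[0])
  })
}

listenForChanges()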
What are the benefits?
- The beauty of listening to the replication functionality is that you can make changes to your database from anywhere - your API, directly in the DB, via a console etc - and you will still receive the changes via websockets.
- Decoupling. For example, if you want to send a new Slack message every time someone makes a purchase, you might build that functionality directly into your API. This server allows you to decouple your async functionality from your API.
- This is built with Phoenix, an extremely scalable Elixir framework
What can I build with this?
- Chat applications
- Games
- Live dashboards
- Connectors - sending events to queues etc
- Streaming analytics
Quick start
If you just want to start it up and see it in action:
docker-compose up

Then visit http://localhost:3000.
Getting Started
Client
Install the client library
npm install --save @supabase/realtime-js
Set up the socket
import { Socket } from '@supabase/realtime-js'

const REALTIME_URL = process.env.REALTIME_URL || 'http://localhost:4000'
var socket = new Socket(REALTIME_URL)
socket.connect()
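Because the client library is ported from phoenix-channels (see Credits), it is assumed here that the usual Phoenix socket lifecycle callbacks carry over. The snippet below is only a sketch under that assumption.

// Assumption: these lifecycle callbacks mirror the Phoenix socket API
// that @supabase/realtime-js is ported from.
socket.onOpen(() => console.log('Socket opened.'))
socket.onError(error => console.log('Socket error:', error))
socket.onClose(() => console.log('Socket closed.'))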
You can listen to these events on each table:
const EVENTS = {
  EVERYTHING: '*',
  INSERT: 'INSERT',
  UPDATE: 'UPDATE',
  DELETE: 'DELETE'
}
Example 1: Listen to all INSERTS on your users table
var allChanges = socket.channel('realtime:public:users')
  .join()
  .on(EVENTS.INSERT, payload => { console.log('Record inserted!', payload) })
Example 2: Listen to all UPDATES in the public schema
var allChanges = socket.channel('realtime:public')
  .join()
  .on(EVENTS.UPDATE, payload => { console.log('Update received!', payload) })
Example 3: Listen to all INSERTS, UPDATES, and DELETES, in all schemas
let allChanges = socket.channel('realtime:*')
  .join()
  .on(EVENTS.EVERYTHING, payload => { console.log('Update received!', payload) })
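When updates are no longer needed, a channel can be left and the socket disconnected. This again assumes the phoenix-channels API carries over unchanged; keeping a separate reference to the channel makes the teardown explicit.

// Assumption: channel.leave() and socket.disconnect() behave as in phoenix-channels.
var usersChannel = socket.channel('realtime:public:users')
usersChannel.join()
usersChannel.on(EVENTS.INSERT, payload => { console.log('Record inserted!', payload) })

// Later, when you want to stop listening:
usersChannel.leave()     // unsubscribe from this channel only
socket.disconnect()      // close the underlying websocket connection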
Server
Database set up
There are some requirements for your database (a small verification script follows this list):
- It must be Postgres 10+ as it uses logical replication
- Set up your DB for replication:
  - it must have the wal_level set to logical. You can check this by running SHOW wal_level; and, if needed, set it with ALTER SYSTEM SET wal_level = logical;
  - you must set max_replication_slots to at least 1, for example: ALTER SYSTEM SET max_replication_slots = 5;
- Create a PUBLICATION for this server to listen to: CREATE PUBLICATION supabase_realtime FOR ALL TABLES;
- [OPTIONAL] If you want to receive the old record (previous values) on UPDATE and DELETE, you can set the REPLICA IDENTITY to FULL like this: ALTER TABLE your_table REPLICA IDENTITY FULL; This has to be set for each table, unfortunately.
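To sanity-check the settings above from a script, you can run the same queries with any Postgres client. The sketch below assumes the node-postgres (pg) package and a DATABASE_URL connection string; it is only a convenience, not part of this server.

// Sketch: verify the replication settings required above (assumes node-postgres).
const { Client } = require('pg')

async function checkReplicationSetup () {
  const client = new Client({ connectionString: process.env.DATABASE_URL })
  await client.connect()

  const walLevel = await client.query('SHOW wal_level;')
  console.log('wal_level:', walLevel.rows[0].wal_level)                        // should be 'logical'

  const slots = await client.query('SHOW max_replication_slots;')
  console.log('max_replication_slots:', slots.rows[0].max_replication_slots)   // should be >= 1

  const pubs = await client.query('SELECT pubname FROM pg_publication;')
  console.log('publications:', pubs.rows.map(r => r.pubname))                  // should include 'supabase_realtime'

  await client.end()
}

checkReplicationSetup()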
Server set up
The easiest way to get started is just to use our docker image. We will add more deployment methods soon.
# Update the environment variables to point to your own database
docker run \
  -e DB_HOST='docker.for.mac.host.internal' \
  -e DB_NAME='postgres' \
  -e DB_USER='postgres' \
  -e DB_PASSWORD='postgres' \
  -e DB_PORT=5432 \
  -e PORT=4000 \
  -e HOSTNAME='localhost' \
  -e SECRET_KEY_BASE='SOMETHING_SUPER_SECRET' \
  -p 4000:4000 \
  supabase/realtime
Contributing
- Fork the repo on GitHub
- Clone the project to your own machine
- Commit changes to your own branch
- Push your work back up to your fork
- Submit a Pull request so that we can review your changes and merge
Releases
To trigger a release, you must tag the commit and then push the tag to origin:
git tag -a 7.x.x -m "some stuff about the release"
git push origin 7.x.x
License
This repo is licensed under Apache 2.0.
Credits
- https://github.com/phoenixframework/phoenix - The server is built with this amazing Elixir framework.
- https://github.com/cainophile/cainophile - A lot of this implementation leveraged the work already done on Cainophile.
- https://github.com/mcampa/phoenix-channels - The client library is ported from this library.