4. WHERE clause → query params
Support only simple operators: less than,
greater than, equality, inequality
Express more complicated logic as views
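A sketch of the idea, with hypothetical table, view, and column names: simple filters map straight onto query params, and anything beyond simple operators becomes a view, which then gets its own route.

```sql
-- Simple filters map directly to query params, e.g.
--   GET /employees?salary=gt.50000&dept_id=eq.7
-- More complicated logic becomes a view with its own route:
CREATE VIEW well_paid_engineers AS
  SELECT e.*
    FROM employees e
    JOIN depts d ON d.id = e.dept_id
   WHERE d.name = 'engineering'
     AND e.salary > 50000;
-- The view is then filterable like any table:
--   GET /well_paid_engineers?salary=lt.90000
```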
5. Primary key → eq. param(s)
A primary key identifies at most a single row
Keys can be compound and/or natural, so
routes like /foo/1 don’t always suffice
/foo?k1=eq.v1&k2=eq.v2
6. Foreign keys → header links
For instance, if an employee belongs to a dept:
Link: <http://me.com/dept?id=eq.7>; rel="dept"
11. Schema search path → API version
Use numerical schema names in the db
Set the schema search path to mask endpoints
and fall through when desired
GET /foo HTTP/1.1
Accept: application/vnd.me.com+format; version=n
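One way the schema trick could look, assuming version-numbered schema names; the server would set the search path per request based on the negotiated version.

```sql
-- One schema per API version (names are illustrative):
CREATE SCHEMA "1";
CREATE SCHEMA "2";
-- Schema "2" redefines only what changed. For a v2 request the
-- server sets:
SET search_path TO "2", "1", public;
-- so v2 relations mask v1 ones, and anything unchanged in v2
-- falls through to the v1 definition.
```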
12. DB roles → OAuth
The web server authenticates you,
then connects to the db as your user
The db authorizes your (hence the server’s) actions
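A minimal sketch of pushing authorization into the database, with a hypothetical role and table: the server authenticates the request, then impersonates the matching database role, so every query is checked against that role's grants.

```sql
-- One database role per API user (names are illustrative):
CREATE ROLE alice LOGIN;
GRANT SELECT ON employees TO alice;

-- After authenticating the HTTP request, the server runs:
SET ROLE alice;
-- From here on, Postgres itself rejects anything alice
-- has not been granted.
```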
13. Column constraints → OPTIONS
We can pipe the OPTIONS output into a
client-side “Faker” to mock server responses
for the client test suite
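A sketch of the metadata an OPTIONS handler could gather, here from information_schema with an illustrative table name; combined with CHECK and foreign-key constraints, this is enough for a client-side "Faker" to generate plausible mock rows.

```sql
-- Column metadata an OPTIONS response could be built from:
SELECT column_name, data_type, is_nullable,
       character_maximum_length
  FROM information_schema.columns
 WHERE table_name = 'employees';
```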
14. PG stats collector → cache headers
The access and modification patterns of a table
can provide a heuristic for HTTP caching headers
Perhaps a special view could allow overriding
ETags, max-age, etc. for each table via a SQL
expression
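One possible heuristic, sketched against the stats collector's pg_stat_user_tables view: compare reads to writes per table and serve rarely-written tables with a longer max-age. The aggregation here is an assumption, not a prescribed formula.

```sql
-- Reads vs. writes per table, as a caching heuristic:
SELECT relname,
       seq_scan + idx_scan                AS reads,
       n_tup_ins + n_tup_upd + n_tup_del  AS writes,
       last_autoanalyze
  FROM pg_stat_user_tables;
-- Tables with many reads and few writes are candidates
-- for a long Cache-Control max-age.
```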
15. EXPLAIN → HTTP 413
The server can examine the query plan and
preemptively forbid inefficient requests,
like large sequential scans or nested-loop joins
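A sketch of how the server could vet a query before running it; the cost threshold and table name are illustrative.

```sql
-- Ask the planner for a machine-readable plan without
-- executing the query:
EXPLAIN (FORMAT JSON) SELECT * FROM employees ORDER BY salary;
-- The server can parse the JSON, inspect "Total Cost" and the
-- node types ("Seq Scan", "Nested Loop", ...), and answer
-- 413 when the estimate exceeds some configured threshold.
```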
16. Deployment is easy
For instance, on Heroku, create a buildpack that
installs the server binary, then push new
migrations and run them. The only way to update
your API server is through migrations.
17. Use a robust migration tool
github.com/theory/sqitch
• Follows a git-inspired workflow
• Models dependencies as a graph, not a line
• TDD at the SQL level!
• No conflicts between branches with git union
merge strategy (unlike schema.rb)
19. Efficiency! Use a fast language.
The web server itself doesn’t change, so we
don’t need the convenience of a scripting
language.
Compile it once for your platform.
22. Generate JSON inside Postgres
Dockyard measured that Postgres’ internal
JSON generation was 2–10X faster than
ActiveRecord serializers for small data and
160X faster for large data.
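A sketch of in-database JSON generation using Postgres' built-in functions (table and column names are illustrative): the whole result set is serialized server-side and returned as a single JSON document, skipping per-row serialization in the web framework.

```sql
-- Serialize a result set to JSON inside the database:
SELECT array_to_json(array_agg(row_to_json(e)))
  FROM (SELECT id, name, salary FROM employees) e;
-- Returns one JSON array of row objects, ready to stream
-- to the client as the response body.
```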