Configuration

The geofront-server command takes a configuration file as a required argument. The configuration is an ordinary Python script that defines the following required and optional variables. Note that all names have to be uppercase.

config.TEAM

(geofront.team.Team) The backend implementation for team authentication. For example, in order to authorize members of a GitHub organization, use the GitHubOrganization implementation:

from geofront.backends.github import GitHubOrganization

TEAM = GitHubOrganization(
    client_id='GitHub OAuth app client id goes here',
    client_secret='GitHub OAuth app client secret goes here',
    org_login='your_org_name'  #  in https://github.com/your_org_name
)

Or you can implement your own backend by subclassing Team.

See also

Module geofront.team — Team authentication
The interface for team authentication.
Class geofront.backends.github.GitHubOrganization
The Team implementation for GitHub organizations.
Class geofront.backends.bitbucket.BitbucketTeam
The Team implementation for Bitbucket Cloud teams.
Class geofront.backends.stash.StashTeam
The Team implementation for Atlassian’s Bitbucket Server (formerly Stash).
config.REMOTE_SET

(RemoteSet) The set of remote servers to be managed by Geofront. It can be anything as long as it’s a mapping object. For example, you can hard-code it using a Python dict:

from geofront.remote import Remote

REMOTE_SET = {
    'web-1': Remote('ubuntu', '192.168.0.5'),
    'web-2': Remote('ubuntu', '192.168.0.6'),
    'web-3': Remote('ubuntu', '192.168.0.7'),
    'worker-1': Remote('ubuntu', '192.168.0.25'),
    'worker-2': Remote('ubuntu', '192.168.0.26'),
    'db-1': Remote('ubuntu', '192.168.0.50'),
    'db-2': Remote('ubuntu', '192.168.0.51'),
}

Every key has to be a string, and every value has to be an instance of Remote. A Remote consists of a user, a hostname, and a port to SSH to. For example, if you’ve ssh-ed to a remote server by the following command:

$ ssh -p 2222 ubuntu@192.168.0.50

A Remote object for it should be:

Remote('ubuntu', '192.168.0.50', 2222)

You can add more dynamism by providing a custom dict-like mapping object. collections.abc.Mapping could help you implement one (a sketch of such a custom mapping follows the CloudRemoteSet example below). For example, CloudRemoteSet is a subtype of Mapping that dynamically loads the list of available instance nodes in the cloud, e.g. Amazon EC2. Thanks to Apache Libcloud, it can work with more than 20 cloud providers like AWS, Azure, or Rackspace.

from geofront.backends.cloud import CloudRemoteSet
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

driver_cls = get_driver(Provider.EC2)
driver = driver_cls('access id', 'secret key', region='us-east-1')
REMOTE_SET = CloudRemoteSet(driver)
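
And here is a minimal sketch of such a custom mapping, assuming a hypothetical JSON inventory file; only collections.abc.Mapping and Remote come from the standard library and Geofront, while everything else (the file format, path, and class name) is purely illustrative:

import collections.abc
import json

from geofront.remote import Remote

class JsonFileRemoteSet(collections.abc.Mapping):
    """Illustrative only: load remotes from a JSON file that maps
    aliases to {"user": ..., "host": ..., "port": ...} objects.
    Mapping requires just __getitem__(), __iter__(), and __len__().
    """

    def __init__(self, path):
        self.path = path

    def _load(self):
        # Re-read the file on every access so that edits are picked up
        # without restarting geofront-server.
        with open(self.path) as f:
            return json.load(f)

    def __getitem__(self, alias):
        entry = self._load()[alias]
        return Remote(entry['user'], entry['host'], entry.get('port', 22))

    def __iter__(self):
        return iter(self._load())

    def __len__(self):
        return len(self._load())

REMOTE_SET = JsonFileRemoteSet('/etc/geofront/remotes.json')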

See also

Class geofront.remote.Remote
Value type that represents a remote server to ssh.
Class geofront.backends.cloud.CloudRemoteSet
The Libcloud-backed dynamic remote set.
Module collections.abc — Abstract Base Classes for Containers
This module provides abstract base classes that can be used to test whether a class provides a particular interface; for example, whether it is hashable or whether it is a mapping.
config.TOKEN_STORE

(werkzeug.contrib.cache.BaseCache) The store to save access tokens. It uses Werkzeug’s cache interface, and Werkzeug provides several built-in implementations as well.

For example, in order to store access tokens in Redis:

from werkzeug.contrib.cache import RedisCache

TOKEN_STORE = RedisCache(host='localhost', db=0)
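
Or, if no Redis server is available, Werkzeug’s built-in FileSystemCache works as well (the cache directory below is an arbitrary example path):

from werkzeug.contrib.cache import FileSystemCache

TOKEN_STORE = FileSystemCache('/var/cache/geofront')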

Of course you can implement your own backend by subclassing BaseCache.

Although it’s a required configuration, when -d/--debug is enabled, SimpleCache (whose contents are all lost once the geofront-server process terminates) is used by default.

See also

Cache — Werkzeug
Cache backend interface and implementations provided by Werkzeug.
config.KEY_STORE

(geofront.keystore.KeyStore) The store to save public keys for each team member. (Not the master key; not to be confused with MASTER_KEY_STORE.)

If TEAM is a GitHubOrganization object, KEY_STORE can also be a GitHubKeyStore. It’s an adapter for GitHub’s per-account public key lists.

from geofront.backends.github import GitHubKeyStore

KEY_STORE = GitHubKeyStore()

You can also store public keys in a database like SQLite, PostgreSQL, or MySQL through DatabaseKeyStore:

import sqlite3
from geofront.backends.dbapi import DatabaseKeyStore

KEY_STORE = DatabaseKeyStore(sqlite3,
                             '/var/lib/geofront/public_keys.db')
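
The same store works with other DB-API 2.0 drivers as well; for example, with PostgreSQL through psycopg2 (the connection parameters below are assumptions, passed through to the driver’s connect() the same way the SQLite path is above):

import psycopg2
from geofront.backends.dbapi import DatabaseKeyStore

KEY_STORE = DatabaseKeyStore(psycopg2,
                             host='localhost', database='geofront')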

Some cloud providers like Amazon EC2 and Rackspace (Next Gen) provide a key pair service. CloudKeyStore helps to use such a service as a public key store:

from geofront.backends.cloud import CloudKeyStore
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

driver_cls = get_driver(Provider.EC2)
driver = driver_cls('api key', 'api secret key', region='us-east-1')
KEY_STORE = CloudKeyStore(driver)

New in version 0.2.0: Added DatabaseKeyStore class. Added CloudKeyStore class.

New in version 0.3.0: Added StashKeyStore class.

config.MASTER_KEY_STORE

(geofront.masterkey.MasterKeyStore) The store to save the master key. (Not public keys; not to be confused with KEY_STORE.)

The master key store should be secure, but at the same time the key should be hard to lose. Geofront provides some built-in implementations:

FileSystemMasterKeyStore
It stores the master key in the file system, as the name suggests. You can set the path to save the key. Although it’s not that secure, it might help you to try out Geofront.
CloudMasterKeyStore
It stores the master key in cloud object storage like Amazon S3. It supports more than 20 cloud providers through the efforts of Libcloud.

from geofront.masterkey import FileSystemMasterKeyStore

MASTER_KEY_STORE = FileSystemMasterKeyStore('/var/lib/geofront/id_rsa')
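
Or, in order to keep the master key in cloud object storage instead, here is a sketch of CloudMasterKeyStore with a private S3 bucket (the bucket name your_team_master_key is an assumption; the full example at the end of this document does the same):

from geofront.backends.cloud import CloudMasterKeyStore
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

driver_cls = get_driver(Provider.S3)
driver = driver_cls('aws access key', 'aws secret key')
container = driver.get_container(container_name='your_team_master_key')
MASTER_KEY_STORE = CloudMasterKeyStore(driver, container, 'id_rsa')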
config.PERMISSION_POLICY

(PermissionPolicy) The permission policy that determines which remotes are visible to each team member, and which of them they’re allowed to SSH to.

The default is DefaultPermissionPolicy, which allows everyone in the team to view and SSH to all available remotes.

If your remote set has metadata for ACL, i.e. identifiers of the groups to be allowed, you can utilize it through GroupMetadataPermissionPolicy.
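
For example, if every remote’s metadata has an Allowed-Groups key listing the groups allowed to access it (the key name is up to you; the full example at the end of this document uses the same pattern):

from geofront.remote import GroupMetadataPermissionPolicy

PERMISSION_POLICY = GroupMetadataPermissionPolicy('Allowed-Groups')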

If you need more subtle and complex rules for ACL, you can implement your own policy by subclassing the PermissionPolicy interface.

New in version 0.2.0.

config.MASTER_KEY_TYPE

(Type[PKey]) The type of the master key that will be generated. It has to be a subclass of paramiko.pkey.PKey:

RSA
paramiko.rsakey.RSAKey
ECDSA
paramiko.ecdsakey.ECDSAKey
DSA (DSS)
paramiko.dsskey.DSSKey

RSAKey by default.
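
For example, in order to use ECDSA instead:

from paramiko.ecdsakey import ECDSAKey

MASTER_KEY_TYPE = ECDSAKey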

New in version 0.4.0.

config.MASTER_KEY_BITS

(Optional[int]) The number of bits the generated master key should be. None by default, which means to follow MASTER_KEY_TYPE’s own default (appropriate) bits.
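
For example, in order to force a 4096-bit master key (this is only meaningful for key types of variable size, like RSA):

MASTER_KEY_BITS = 4096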

New in version 0.2.0.

Changed in version 0.4.0: Since the appropriate MASTER_KEY_BITS depends on its MASTER_KEY_TYPE, the default value of MASTER_KEY_BITS became None (from 2048).

config.MASTER_KEY_RENEWAL

(Optional[datetime.timedelta]) The interval of master key renewal. None means never. For example, if you want to renew the master key every week:

import datetime

MASTER_KEY_RENEWAL = datetime.timedelta(days=7)

A day by default.

config.TOKEN_EXPIRE

(datetime.timedelta) The time after which each access token expires. The shorter it is, the more secure, but team members have to authenticate more frequently, so too short a time would interrupt them.
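
For example, in order to expire each token after three days:

import datetime

TOKEN_EXPIRE = datetime.timedelta(days=3)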

A week by default.

config.ENABLE_HSTS

(bool) Whether to enable HSTS (HTTP Strict Transport Security).
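
For example, in order to turn it on (sensible only when the server is served over TLS):

ENABLE_HSTS = True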

False by default.

New in version 0.2.2.

Example

# This is a configuration example.  See docs/config.rst as well.

# Scenario: Your team is using GitHub, and the organization login is @YOUR_TEAM.
# All members have already registered their public keys to their GitHub
# accounts, and are using git through ssh public key authentication.

# First of all, you have to decide how to authorize team members.
# Geofront provides a built-in authorization method for GitHub organizations.
# It requires a pair of client keys (id and secret) for OAuth authentication.
# You can create one from:
#
# https://github.com/organizations/YOUR_TEAM/settings/applications/new
#
# Then import GitHubOrganization class, and configure a pair of client keys
# and your organization login name (@YOUR_TEAM in here).
from geofront.backends.github import GitHubOrganization

TEAM = GitHubOrganization(
    client_id='0123456789abcdef0123',
    client_secret='0123456789abcdef0123456789abcdef01234567',
    org_login='YOUR_TEAM'
)

# Your colleagues have already registered their public keys to GitHub,
# so you don't need additional storage for public keys.  We'll use GitHub
# as the public key store.
from geofront.backends.github import GitHubKeyStore

KEY_STORE = GitHubKeyStore()

# Unlike public keys, the master key ideally ought to be accessible by
# only Geofront.  Assume you use Amazon Web Services.  You'll store
# the master key in your private S3 bucket named your_team_master_key.
from geofront.backends.cloud import CloudMasterKeyStore
from libcloud.storage.types import Provider
from libcloud.storage.providers import get_driver

driver_cls = get_driver(Provider.S3)
driver = driver_cls('aws access key', 'aws secret key')
container = driver.get_container(container_name='your_team_master_key')
MASTER_KEY_STORE = CloudMasterKeyStore(driver, container, 'id_rsa')

# You have to let Geofront know which remote servers to manage.
# Although the list could be hard-coded in the configuration file,
# you'll get the list dynamically from the EC2 API instead.  Assume all
# our AMIs are Amazon Linux, so the username is always ec2-user.
# If you're using Ubuntu AMIs it should be ubuntu instead.
from geofront.backends.cloud import CloudRemoteSet
from libcloud.compute.types import Provider
from libcloud.compute.providers import get_driver

driver_cls = get_driver(Provider.EC2)
driver = driver_cls('aws access id', 'aws secret key', region='us-east-1')
REMOTE_SET = CloudRemoteSet(driver, user='ec2-user')

# Suppose your team is divided into several subgroups, and these subgroups
# are represented as teams of the GitHub organization.  So you can control
# who can access each remote by specifying the allowed groups in its
# metadata.  CloudRemoteSet, used for REMOTE_SET above, exposes each EC2
# instance's metadata as-is.  We suppose every EC2 instance has an
# Allowed-Groups metadata key whose value is a space-separated list of
# group slugs.  The following setting will allow only members who belong
# to the corresponding groups to access each remote.
from geofront.remote import GroupMetadataPermissionPolicy

PERMISSION_POLICY = GroupMetadataPermissionPolicy('Allowed-Groups')

# Geofront provisions access tokens (you can think of them as sessions)
# for Geofront clients.  Assume you already have a Redis server running
# on the same host.  We'll store tokens in db 0 on that Redis server
# in this example.
from werkzeug.contrib.cache import RedisCache

TOKEN_STORE = RedisCache(host='localhost', db=0)