Commit ae1e9385 by Szeberényi Imre

Initial version, installing orig CIRCLE

*.swp
*.swo
*~
## Git backend tutorial
### 1. Install salt master and register minion
```bash
sudo apt-get install salt-minion
sudo apt-get install salt-master
```
#### Edit /etc/salt/minion and set master to 127.0.0.1.
#### Open ports 4505 and 4506 with `ufw allow`.
#### Restart the master and the minion, then accept the minion key with `salt-key -A`.
#### Use the `-l debug` option to show debug messages.
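A minimal sketch of the steps above (assuming Ubuntu's `salt-master`/`salt-minion` service names and `ufw` as the firewall; you can edit `/etc/salt/minion` by hand instead of using `sed`):
```bash
sudo sed -i 's/^#*master:.*/master: 127.0.0.1/' /etc/salt/minion
sudo ufw allow 4505
sudo ufw allow 4506
sudo service salt-master restart
sudo service salt-minion restart
sudo salt-key -A   # accept the minion key
```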
### 2. Install pygit2
#### 2.1 Without adding the PPA below, installing pygit2 is a bit difficult. Use this:
```bash
sudo add-apt-repository ppa:dennis/python
sudo apt-get update
sudo apt-get install python-pygit2
```
#### 2.2 Copy the SSH keys the master will use for gitfs (the `pubkey`/`privkey` paths referenced below) to `/root/.ssh/`.
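A minimal sketch, assuming the key pair does not exist yet and its public half is registered as a deploy key on the git server:
```bash
sudo ssh-keygen -t rsa -N "" -f /root/.ssh/git
sudo cat /root/.ssh/git.pub   # register this as a deploy key for circle/salt
```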
### 3. Modify the *master* config file:
```yaml
fileserver_backend:
  - git

gitfs_remotes:
  - git@git.ik.bme.hu:circle/salt.git:
    - pubkey: /root/.ssh/git.pub
    - privkey: /root/.ssh/git

pillar_roots:
  base:
    - /home/cloud/salt/pillar

gitfs_root: salt
```
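After changing the master config, restart the master and check that gitfs serves the expected files; a minimal sketch using Salt's fileserver runner:
```bash
sudo service salt-master restart
sudo salt-run fileserver.file_list
```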
### 4. Clone pillar to /home/cloud/
```bash
git clone https://git.ik.bme.hu/circle/salt.git
```
### 5. Finish: call `salt '*' state.sls allinone` (or whatever you need)
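For example, targeting every registered minion (a minimal sketch; adjust the state name as needed):
```bash
sudo salt '*' state.sls allinone
```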
### The *master* config file:
#### The default git provider is pygit2. You can change that to dulwich or gitpython.
```yaml
gitfs_provider: dulwich
```
#### Include git in the fileserver_backend list:
```yaml
fileserver_backend:
- git
```
#### Specify one or more git://, https://, file://, or ssh:// URLs in gitfs_remotes to configure which repositories to cache and search for requested files:
```yaml
gitfs_remotes:
- git@git.ik.bme.hu:circle/salt.git
```
> The gitfs_remotes option accepts an ordered list of git remotes to cache and search, in listed order, for requested files.
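For instance, a sketch with two remotes searched in the listed order (the second URL is purely hypothetical):
```yaml
gitfs_remotes:
  - git@git.ik.bme.hu:circle/salt.git
  - https://example.com/other/states.git
```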
#### Serving from a subdirectory
```yaml
gitfs_root: foo/baz
```
#### Other options
It is possible to change branches and environments.
Change the base branch:
```yaml
gitfs_base: salt-base
```
### Tutorial with more information:
http://docs.saltstack.com/en/latest/topics/tutorials/gitfs.html
### Local gitfs issue:
https://github.com/saltstack/salt/issues/6660
# CIRCLE Project - Salt Installer
## OS Support
* Red Hat Linux family:
  * Red Hat Enterprise Linux 7+
  * CentOS 7+
  * Scientific Linux 7+
* Debian Linux family:
  * Debian Linux 8+
  * Ubuntu Linux 14.04 LTS
## Prerequisites
### Red Hat family
Install EPEL repository (if the link is broken, please contact us):
```bash
sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
```
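If that exact epel-release RPM is no longer available at the mirror, installing the `epel-release` package from the distribution's own repositories (available on CentOS and Scientific Linux) is an alternative sketch:
```bash
sudo yum install epel-release
```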
Install some important packages:
```bash
sudo yum install python2-pip gcc vim git
```
### Debian family
Install some important packages:
```bash
sudo apt-get update
sudo apt-get install python-pip vim git
```
## Install Salt
```bash
sudo pip install salt==2014.7.1
```
## Get the installer
Clone the CIRCLE installer git repository into the cloud user's home directory:
```bash
git clone https://git.ik.bme.hu/circle/salt.git
```
## Change variables
Modify the `installer.sls` file:
```bash
vim salt/pillar/installer.sls
```
Most used variables
-------------------
* **proxy_secret**: used for cryptographic signing; should be set to a unique, unpredictable value (see the sketch after this list)
* **secret_key**: used for cryptographic signing; should be set to a unique, unpredictable value (see the sketch after this list)
* **deployment_type**: local (development) or production
* **admin_user**: user name to log in as admin on the site
* **admin_pass**: password to log in as admin on the site
* **database**:
  * **password**: database user's password
* **amqp**:
  * **password**: AMQP user's password
  * **host**: AMQP server IP - usually runs on localhost
* **graphite**:
  * **password**: Graphite user's password
  * **host**: Graphite server IP - usually runs on localhost
* **nfs**:
  * **enabled**: whether NFS is enabled
  * **server**: NFS server's hostname
  * **network**: network allowed to access the shared files
  * **directory**: this directory will be shared
* **storagedriver**:
  * **queue_name**: the server's hostname
* **fwdriver**:
  * **queue_name**: the server's hostname
  * **gateway**: the server's gateway
  * **external_net**: the server's network
  * **external_if**: the server's network interface
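A minimal sketch for generating unique, unpredictable values for `proxy_secret` and `secret_key` (any long random string will do; `openssl` is just one option):
```bash
openssl rand -hex 32
```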
Other variables
---------------
* user: user who will install the software
* timezone: the server's time zone, in region/city format
* amqp:
  * user: AMQP user
  * port: AMQP server's port
  * vhost: virtual host - specifies the namespace for entities (exchanges and queues) referred to by the protocol
* agent:
  * repo_revision: revision
* agentdriver:
  * repo_revision: revision
* cache: cache URL - usually pylibmc://127.0.0.1:11211/
* database:
  * name: Django database's name
  * user: database user
* fwdriver:
  * repo_revision: revision
  * user: fwdriver user name
  * vm_if: VM interface
  * vm_net: VM network
  * management_if: management interface
  * reload_firewall_timeout: timeout for synchronous firewall reload
* graphite:
  * user: Graphite user
  * port: Graphite server's port
  * secret_key: Graphite's secret key
* manager:
  * repo_revision: revision
* monitor-client:
  * repo_revision: revision
* storage-driver:
  * repo_revision: revision
* vm-driver:
  * repo_revision: revision
* vnc-driver:
  * repo_revision: revision
## Install Circle
Run the following installation command:
```bash
sudo salt-call state.sls allinone --local --file-root=/home/$USER/salt/salt --pillar-root=/home/$USER/salt/pillar
```
When it finishes, you should see a "Failed: 0" message.
If the installer fails, please visit the [Troubleshooting](#troubleshooting) paragraph.
After the installation, delete the agent.conf or agent.service file:
If you have upstart:
```bash
sudo rm -f /etc/init/agent.conf
```
Or if you have systemd:
```bash
sudo rm -f /etc/systemd/system/agent.service
```
## Quickstart - Standalone Node
### Login
Log in to the CIRCLE website as admin (the site is accessible on port 443). The user name and password are in `salt/pillar/installer.sls`.
### Create Node
To run virtual machines, we need to create nodes and add them to the system. Click on the new icon in the Nodes box of the dashboard.
#### Configure Node
For a standalone configuration, type the current machine's hostname into Host/name, its MAC address into Host/MAC, and its IP into Host/IP. Choose managed-vm as the VLAN.
#### Activate Node
Click on the 'Activate' icon to use the Node.
### Create Virtual Machine
To create a new virtual machine, we use templates: images based on previously saved VMs. We don't have any templates yet, so let's create one. Click on the Templates/new icon and choose 'Create a new base VM without disk'.
#### Configure Template
Set the name, the CPU and RAM settings, and the architecture. Tick the boot menu box, select the network and the lease, and note which operating system you will use. Finally, create the template.
> The rows marked with an asterisk must be filled in.
![configure standalone node](_static/images/configure_node.jpg)
#### Add disk
Currently we don't have any disks attached to our VM. To add one, click on the 'create disk' icon in the Resources menu and set the name and size.
![disk setup](_static/images/disk.jpg)
#### Attach ISO
To install an OS, we can boot from an ISO image. Click on 'download disk' and enter the ISO's URL.
![download iso](_static/images/iso.jpg)
### Start Virtual Machine
Finally, we can run the machine. Click on 'deploy' and start it. You can choose which node it should run on.
![ubuntu 14.04](_static/images/ubuntu.png)
## Troubleshooting ##
### Portal won't load
Port 443 may be closed. Check it and open it if necessary.
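A sketch for opening it, depending on which firewall the host uses (ufw on the Debian family, firewalld on the Red Hat family):
```bash
sudo ufw allow 443/tcp
# or, with firewalld:
sudo firewall-cmd --permanent --add-port=443/tcp && sudo firewall-cmd --reload
```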
### Portal won't load on Ubuntu 14.04
```bash
sudo service nginx restart
```
### VMs cannot reach the internet on Red Hat family distros
```bash
sudo systemctl restart systemd-sysctl
```
#!/bin/sh
if [ $(id -u) -ne 0 ]; then
    RED_UNDERLINED='\033[4;31m'
    NC='\033[0m' # No Color
    echo -e $RED_UNDERLINED"Please run as root or use sudo!"$NC
    exit 1
fi

FULLPATH=$(readlink -m $0)
PREFIX=$(dirname $FULLPATH)

pip install -r $PREFIX/requirements.txt

# Pass --kvm-present to the installer only if hardware virtualization is usable
$PREFIX/kvm-ok > /dev/null
retv=$?
EXTRAPARAMS=""
if [ $retv -eq 0 ]; then
    EXTRAPARAMS="--kvm-present"
fi

python $PREFIX/install.py $EXTRAPARAMS "$@"
import salt.client
from salt import config
from salt.log.setup import setup_console_logger

from os.path import join, abspath, dirname
from netifaces import ifaddresses, gateways, AF_INET
from netaddr import IPNetwork
import socket
import yaml
import random
import os
import getpass
from halo import Halo
import argparse

PREFIX = dirname(__file__)


def get_timezone():
    localtime = '/etc/localtime'
    try:
        zonefile = abspath(os.readlink(localtime))
        zone_parts = zonefile.split('/')
        return join(zone_parts[-2], zone_parts[-1])
    except Exception:
        return 'Europe/Budapest'


def get_gateway():
    return gateways()['default'][AF_INET]


def get_default_gw():
    return get_gateway()[0]


def get_interface():
    return get_gateway()[1]


def get_ip_with_mask(intf):
    ip = ifaddresses(intf)[AF_INET][0]
    return str(IPNetwork(join(ip['addr'], ip['netmask'])))


def get_hostname():
    return str(socket.gethostname().split('.')[0])


def print_warning(text):
    RED_UNDERLINED = '\033[4;31m'
    NC = '\033[0m'  # No Color
    print(RED_UNDERLINED + text + NC)


def input_password_with_retype():
    pw = getpass.getpass("Enter admin password:")
    if len(pw) == 0:
        print_warning('Please enter a non-empty password!')
        return ('', False)
    pw2 = getpass.getpass("Retype password:")
    status = pw == pw2
    if not status:
        print_warning('The passwords are different.')
    return (pw, status)


def input_admin_password():
    pw, status = input_password_with_retype()
    while not status:
        pw, status = input_password_with_retype()
    return pw.encode('utf8')


def yaml_pretty_dump(data, file, **extra):
    yaml.dump(data, file, encoding='utf-8', default_flow_style=False, **extra)


def dump_errors(result):
    # Filter errors only
    errors = {}
    for key, data in result.iteritems():
        if not data['result']:
            errors[key] = data
    with open(join(PREFIX, 'errors.yml'), 'w') as f:
        yaml_pretty_dump(errors, f)


class KeyStore:
    """ Loads, stores, generates, and saves secret keys """

    def __init__(self, keyfile):
        self.keyfile = keyfile
        self.data = {}
        try:
            with open(keyfile) as f:
                self.data = yaml.safe_load(f)
        except Exception:
            pass

    def gen_key(self, length):
        # Simple random key: characters sampled without replacement
        s = "abcdefghijklmnopqrstuvwxyz01234567890ABCDEFGHIJKLMNOPQRSTUVWXYZ"
        return "".join(random.sample(s, length))

    def get_key(self, name):
        key = self.data.get(name)
        if key is None:
            key = self.gen_key(16)
            self.data[name] = key
        return key

    def save(self):
        with open(self.keyfile, 'w') as f:
            yaml.dump(self.data, f)


# Command line options
parser = argparse.ArgumentParser()
parser.add_argument('--kvm-present', action='store_true',
                    help='Installs with KVM hypervisor otherwise with QEMU.')
parser.add_argument('--dev', action='store_true',
                    help='Installs development version')
parser.add_argument('--local', action='store_true',
                    help='Installs development version')
args = parser.parse_args()

if args.dev or args.local:
    deployment_type = 'local'
else:
    deployment_type = 'production'

KEYFILE = join(PREFIX, '.circlekeys')
ks = KeyStore(KEYFILE)

# Assemble the pillar data that will be written to pillar/installer.sls
installer_sls = {
    'user': 'cloud',
    'proxy_secret': ks.get_key('proxy_secret'),
    'secret_key': ks.get_key('secret_key'),
    'timezone': get_timezone(),
    'deployment_type': deployment_type,
    'admin_user': 'admin',
    'admin_pass': input_admin_password(),
    'database': {
        'name': 'circle',
        'user': 'circle',
        'password': ks.get_key('database_password'),
    },
    'amqp': {
        'user': 'cloud',
        'password': ks.get_key('amqp_password'),
        'host': '127.0.0.1',
        'port': 5672,
        'vhost': 'circle',
    },
    'graphite': {
        'user': 'monitor',
        'password': ks.get_key('graphite_password'),
        'host': '127.0.0.1',
        'port': 5672,
        'vhost': 'monitor',
        'queue': 'monitor',
        'secret_key': ks.get_key('graphite_secret_key'),
    },
    'cache': 'pylibmc://127.0.0.1:11211/',
    'nfs': {
        'enabled': True,
        'server': '127.0.0.1',
        'network': '127.0.0.0/8',
        'directory': '/datastore',
    },
    'storagedriver': {
        'queue_name': get_hostname(),
    },
    'fwdriver': {
        'gateway': get_default_gw().encode('utf-8'),
        'external_if': get_interface().encode('utf-8'),
        'external_net': get_ip_with_mask(get_interface()).encode('utf-8'),
        'queue_name': get_hostname(),
        'management_if': 'ethy',
        'trunk_if': 'linkb',
    },
    'vmdriver': {
        'hypervisor_type': 'kvm' if args.kvm_present else 'qemu',
    },
}

ks.save()  # Save secret keys

# Make installer.sls
INSTALLER_SLS = join(PREFIX, 'pillar/installer.sls')
with open(INSTALLER_SLS, 'w') as f:
    yaml_pretty_dump(installer_sls, f)

# NOTE: default logfile is '/var/log/salt/minion'
opts = config.minion_config('')
opts['file_client'] = 'local'
# NOTE: False will cause salt to only display output
#       for states that failed or states that have changes
opts['state_verbose'] = False
opts['file_roots'] = {'base': [join(PREFIX, 'salt')]}
opts['pillar_roots'] = {'base': [join(PREFIX, 'pillar')]}
setup_console_logger(log_level='info')
caller = salt.client.Caller(mopts=opts)

# Run install with salt
with Halo(text='Installing', spinner='dots'):
    result = caller.function('state.sls', 'allinone', with_grains=True)

# Count errors and print to console
error_num = 0
for key, data in result.iteritems():
    if not data['result']:
        print('Error in state: %s' % key)
        error_num += 1

if error_num == 0:
    print('Successfully installed!')
else:
    print_warning('%i error(s) occurred during install!' % error_num)
    dump_errors(result)
#!/bin/sh
#
# kvm-ok - check whether the CPU we're running on supports KVM acceleration
# Copyright (C) 2008-2010 Canonical Ltd.
#
# Authors:
# Dustin Kirkland <kirkland@canonical.com>
# Kees Cook <kees.cook@canonical.com>
#
# This program is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License version 3,
# as published by the Free Software Foundation.
#
# This program is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with this program. If not, see <http://www.gnu.org/licenses/>.
set -e
assert_root() {
    if [ "$(id -u)" != "0" ]; then
        echo "INFO: For more detailed results, you should run this as root"
        echo "HINT: sudo $0"
        exit 1
    fi
}

verdict() {
    # Print verdict
    if [ "$1" = "0" ]; then
        echo "KVM acceleration can be used"
        exit 0
    else
        echo "KVM acceleration can NOT be used"
        exit 1
    fi
}

# check cpu flags for capability
virt=$(egrep -m1 -w '^flags[[:blank:]]*:' /proc/cpuinfo | egrep -wo '(vmx|svm)') || true
[ "$virt" = "vmx" ] && brand="intel"
[ "$virt" = "svm" ] && brand="amd"

if [ -z "$virt" ]; then
    echo "INFO: Your CPU does not support KVM extensions"
    assert_root
    verdict 1
fi

# Now, check that the device exists
if [ -e /dev/kvm ]; then
    echo "INFO: /dev/kvm exists"
    verdict 0
else
    echo "INFO: /dev/kvm does not exist"
    echo "HINT: sudo modprobe kvm_$brand"
fi

assert_root

# Prepare MSR access
msr="/dev/cpu/0/msr"
if [ ! -r "$msr" ]; then
    modprobe msr
fi
if [ ! -r "$msr" ]; then
    echo "You must be root to run this check." >&2
    exit 2
fi

echo "INFO: Your CPU supports KVM extensions"

disabled=0
# check brand-specific registers
if [ "$virt" = "vmx" ]; then
    BIT=$(rdmsr --bitfield 0:0 0x3a 2>/dev/null || true)
    if [ "$BIT" = "1" ]; then
        # and FEATURE_CONTROL_VMXON_ENABLED_OUTSIDE_SMX clear (no tboot)
        BIT=$(rdmsr --bitfield 2:2 0x3a 2>/dev/null || true)
        if [ "$BIT" = "0" ]; then
            disabled=1
        fi
    fi
elif [ "$virt" = "svm" ]; then
    BIT=$(rdmsr --bitfield 4:4 0xc0010114 2>/dev/null || true)
    if [ "$BIT" = "1" ]; then
        disabled=1
    fi
else
    echo "FAIL: Unknown virtualization extension: $virt"
    verdict 1
fi

if [ "$disabled" -eq 1 ]; then
    echo "INFO: KVM ($virt) is disabled by your BIOS"
    echo "HINT: Enter your BIOS setup and enable Virtualization Technology (VT),"
    echo "      and then hard poweroff/poweron your system"
    verdict 1
fi

verdict 0
agent:
  repo_name: https://git.ik.bme.hu/circle/agent.git
  repo_revision: master
agentdriver:
  repo_name: https://git.ik.bme.hu/circle/agentdriver.git
  repo_revision: master
amqp:
  user: cloud
  password: password
  host: localhost
  port: 5672
  vhost: circle
graphite:
  user: monitor
  password: monitor
  host: localhost
  port: 5672
  vhost: monitor
  queue: monitor
user: cloud
cache: pylibmc://127.0.0.1:11211/
proxy_secret: xooquageire7uX1
secret_key: Ga4aex3Eesohngo
timezone: Europe/Budapest
deployment_type: local
admin_user: admin
admin_pass: 4j23oihreehfd
database:
  name: circle
  user: circle
  password: hoGei6paiN0ieda
graphite:
  secret_key: ahf2aim7ahLeo8n
nfs:
  enabled: false
  server: localhost
  network: 192.168.1.0/24
  directory: /datastore
dnsdriver:
  repo_name: https://git.ik.bme.hu/circle/dnsdriver.git
  repo_revision: master
  dns_db_dir: /var/lib/circle/dnsdriver
  address_list: 127.0.0.1
fwdriver:
  repo_name: https://git.ik.bme.hu/circle/fwdriver.git
  repo_revision: master
  user: fw
  queue_name: cloud
  vm_if: vm
  vm_net: 192.168.2.254/24
  vm_net_ip: 192.168.2.254
  vm_net_mask: 255.255.255.0
  management_if: eth5
  management_net: 192.168.1.254/24
  external_if: eth0
  external_net: 10.0.0.97/16
  gateway: 10.0.255.254
  reload_firewall_timeout: 120
#user: cloud
#proxy_secret: xooquageire7uX1
#secret_key: Ga4aex3Eesohngo
#timezone: Europe/Budapest
#deployment_type: local
#admin_user: admin
#admin_pass: 4j23oihreehfd
#database:
# name: circle
# user: circle
# password: hoGei6paiN0ieda
#amqp:
# user: cloud
# password: password
# host: 127.0.0.1
# port: 5672
# vhost: circle
#graphite:
# user: monitor
# password: monitor
# host: 127.0.0.1
# port: 5672
# vhost: monitor
# queue: monitor
# secret_key: ahf2aim7ahLeo8n
#cache: pylibmc://127.0.0.1:11211/
#nfs:
# enabled: true
# server: 10.0.0.115
# network: 192.168.1.0/24
# directory: /datastore
#storagedriver:
# queue_name: cloud-6605
#fwdriver:
# queue_name: cloud-6605
# gateway: 10.0.255.254
# external_net: 10.0.0.115/16
# external_if: eth0
# trunk_if: linkb
# management_if: ethy
manager:
  repo_name: https://git.ik.bme.hu/circle/cloud.git
  repo_revision: master
monitor-client:
  repo_name: https://git.ik.bme.hu/circle/monitor-client.git
  repo_revision: master
storagedriver:
  repo_name: https://git.ik.bme.hu/circle/storagedriver.git
  repo_revision: master
  queue_name: storageserver
base:
  '*':
    - vmdriver
    - amqp
    - agentdriver
    - agent
    - storagedriver
    - vncproxy
    - monitor-client
    - vmdriver
    - firewall
    - manager
    - common
    - dnsdriver
    - installer
vmdriver:
  repo_name: https://git.ik.bme.hu/circle/vmdriver.git
  repo_revision: master
  hypervisor_type: kvm
vncproxy:
  repo_name: https://git.ik.bme.hu/circle/vncproxy.git
  repo_revision: master
salt==2014.7.1
netaddr==0.7.14
netifaces==0.10.6
halo==0.0.7
/home/{{ pillar['user'] }}/.virtualenvs/agentdriver/bin/postactivate:
  file.managed:
    - source: salt://agentdriver/files/postactivate
    - template: jinja
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 700

/etc/incron.d/agentdriver:
  file.managed:
    - source: salt://agentdriver/files/agentdriver.incron
    - template: jinja
    - user: root
    - group: root

{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/agentdriver.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/agentdriver/miscellaneous/agentdriver.service
{% else %}
/etc/init/agentdriver.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/agentdriver/miscellaneous/agentdriver.conf
{% endif %}

{% if grains['os_family'] == 'RedHat' %}
incrond:
{% else %}
incron:
{% endif %}
  service:
    - full_restart: true
    - enable: true
    - running
    - watch:
      - file: /etc/incron.d/agentdriver
/var/lib/libvirt/serial IN_CREATE setfacl -m u:{{ pillar['user'] }}:rw $@/$#
export AMQP_URI=amqp://{{ pillar['amqp']['user'] }}:{{ pillar['amqp']['password'] }}@{{ pillar['amqp']['host'] }}:{{ pillar['amqp']['port'] }}/{{ pillar['amqp']['vhost'] }}
export CACHE_URI={{ pillar['cache'] }}
include:
  - common

gitrepo_agentdriver:
  git.latest:
    - name: {{ pillar['agentdriver']['repo_name'] }}
    - rev: {{ pillar['agentdriver']['repo_revision'] }}
    - target: /home/{{ pillar['user'] }}/agentdriver
    - user: {{ pillar['user'] }}
    - require:
      - pkg: git

include:
  - agentdriver.gitrepo
  - agentdriver.virtualenv
  - agentdriver.configuration

agentdriver:
  pkg.installed:
    - pkgs:
      - git
      - ntp
      - incron
{% if grains['os_family'] == 'RedHat' %}
      - python2-pip
      - libmemcached-devel
      - python-devel
      - python-virtualenvwrapper
      - zlib-devel
{% else %}
      - libmemcached-dev
      - python-dev
      - virtualenvwrapper
      - python-pip
      - zlib1g-dev
{% endif %}
    - require_in:
      - git: gitrepo_agentdriver
      - virtualenv: virtualenv_agentdriver
  user:
    - present
    - name: {{ pillar['user'] }}
    - gid_from_name: True
    - shell: /bin/bash
    - groups:
{% if grains['os_family'] == 'RedHat' %}
      - wheel
{% else %}
      - sudo
{% endif %}
    - require_in:
      - git: gitrepo_agentdriver
      - virtualenv: virtualenv_agentdriver

virtualenv_agentdriver:
  virtualenv.managed:
    - name: /home/{{ pillar['user'] }}/.virtualenvs/agentdriver
    - requirements: /home/{{ pillar['user'] }}/agentdriver/requirements.txt
    - user: {{ pillar['user'] }}
    - no_chown: true

include:
  - profile
  - agentdriver
  - manager
  - graphite
  - monitor-client
  - storagedriver
  - vmdriver
  - vncproxy
  - fwdriver
  - network

git:
  pkg.installed

/var/lib/circle/dnsdriver:
  file.directory:
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 755
    - makedirs: True

/var/lib/circle/dnsdriver/makefile:
  file.managed:
    - source: salt://dnsdriver/files/makefile
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 755

/etc/systemd/system/dnscelery.service:
  file.managed:
    - source: salt://dnsdriver/files/dnscelery.service
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 755
    - template: jinja

/home/{{ pillar['user'] }}/.virtualenvs/dnsdriver/bin/postactivate:
  file.managed:
    - source: salt://dnsdriver/files/postactivate
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 755
    - template: jinja
    - require:
      - virtualenv: virtualenv_dnsdriver

tinydns_conf:
  file.managed:
    - name: /etc/ndjbdns/tinydns.conf
    - source: salt://dnsdriver/files/tinydns.conf
    - user: {{ pillar['user'] }}
    - group: {{ pillar['user'] }}
    - mode: 755
    - template: jinja
[Unit]
Description=DNS driver
Wants=network.target
After=network.target
[Service]
User={{ pillar["user"] }}
Group={{ pillar["user"] }}
KillSignal=SIGTERM
TimeoutStopSec=600
Restart=always
WorkingDirectory=/home/{{ pillar["user"] }}/dnsdriver
ExecStart=/bin/bash -c "source /home/{{ pillar["user"] }}/.virtualenvs/dnsdriver/bin/activate; source /home/{{ pillar["user"] }}/.virtualenvs/dnsdriver/bin/postactivate; exec celery -A dnscelery worker --loglevel=info -n $(/bin/hostname -s).dns"
[Install]
WantedBy=multi-user.target
default:
	tinydns-data
export AMQP_URI=amqp://{{ pillar['amqp']['user'] }}:{{ pillar['amqp']['password'] }}@{{ pillar['amqp']['host'] }}:{{ pillar['amqp']['port'] }}/{{ pillar['amqp']['vhost'] }}
export DNS_DB_DIR={{ pillar['dnsdriver']['dns_db_dir'] }}
# extra paramaters for dnscelery
#export EXTRA=
#
# tinydns.conf: this file is part of the djbdns project.
#
# Here we define some variables vital for running tinydns.
#
# Things to remember:
#
# - Lines starting with `#' are comments, thus ignored.
# - Blank lines are blank, thus ignored.
# - Do not leave blank spaces around `=' sign while defining a variable.
#
# Maximum number of bytes that could be allocated if required.
#
DATALIMIT=300000
# No of bytes to allocate for the cache. This may not exceed DATALIMIT
#
# CACHESIZE=100000
# Address to listen on for incoming connections. It could be comma separated
# list of IP addresses.
#
# IP=127.0.0.1[,x.x.x.x,...]
#
IP={{ pillar['dnsdriver']['address_list'] }}
# Address to use while sending out-going requests. 0.0.0.0 means machines
# primary IP address.
#
# IPSEND=0.0.0.0
# A non-root user whose privileges should be acquired by tinydns.
# Default: daemon
# See: $ id -u daemon
#
UID=2
# A non-root group whose privileges should be acquired by tinydns.
# Default: daemon
# See: $ id -g daemon
#
GID=2
# ROOT: is the new root & working directory for tinydns.
# Obviously, the user whose ID is mentioned above MUST be able to read from
# this directory.
#
# Also, this is where `data' and `data.cdb' files should reside.
#
ROOT={{ pillar['dnsdriver']['dns_db_dir'] }}
# If HIDETTL is set, tinydns always uses a TTL of 0 in its responses.
#
# HIDETTL=
# If FORWARDONLY is set, tinydns treats the servers/roots as a list of IP
# addresses for other caches, not root servers. It forwards queries to those
# caches the same way a client does, rather than contacting a chain of servers
# according to NS records.
#
# FORWARDONLY=
# If DEBUG_LEVEL is set, tinydns displays helpful debug messages to
# the console.
#
DEBUG_LEVEL=1
include:
  - common

gitrepo_dnsdriver:
  git.latest:
    - name: {{ pillar['dnsdriver']['repo_name'] }}
    - rev: {{ pillar['dnsdriver']['repo_revision'] }}
    - target: /home/{{ pillar['user'] }}/dnsdriver
    - user: {{ pillar['user'] }}
    - require:
      - pkg: git

include:
  - dnsdriver.gitrepo
  - dnsdriver.virtualenv
  - dnsdriver.configuration

dnsdriver:
  pkg.installed:
    - pkgs:
      - ndjbdns
      - make
      - python-virtualenvwrapper
    - require_in:
      - virtualenv: virtualenv_dnsdriver
      - file: tinydns_conf

dnscelery:
  service.running:
    - enable: True
    - watch:
      - pkg: dnsdriver
      - sls: dnsdriver.gitrepo
      - sls: dnsdriver.virtualenv
      - sls: dnsdriver.configuration

tinydns:
  service.running:
    - enable: True
    - watch:
      - pkg: dnsdriver
      - sls: dnsdriver.gitrepo
      - sls: dnsdriver.virtualenv
      - sls: dnsdriver.configuration

virtualenv_dnsdriver:
  virtualenv.managed:
    - name: /home/{{ pillar['user'] }}/.virtualenvs/dnsdriver
    - requirements: /home/{{ pillar['user'] }}/dnsdriver/requirements.txt
    - user: {{ pillar['user'] }}
    - no_chown: true
    - require:
      - git: gitrepo_dnsdriver

include:
  - openvswitch

/home/{{ pillar['fwdriver']['user'] }}/.virtualenvs/fw/bin/postactivate:
  file.managed:
    - source: salt://fwdriver/files/postactivate
    - template: jinja
    - user: {{ pillar['fwdriver']['user'] }}
    - group: {{ pillar['fwdriver']['user'] }}
    - mode: 700

{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/firewall.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall.service

/etc/systemd/system/firewall-init.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://fwdriver/files/firewall-init.service
{% else %}
/etc/init/firewall.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall.conf

/etc/init/firewall-init.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall-init.conf
{% endif %}

/etc/dhcp:
  file.directory:
    - mode: 755

/etc/dhcp/dhcpd.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://fwdriver/files/dhcpd.conf

/etc/dhcp/dhcpd.conf.generated:
  file.managed:
    - user: {{ pillar['fwdriver']['user'] }}
    - group: {{ pillar['fwdriver']['user'] }}

{% if grains['os_family'] != 'RedHat' and grains['os'] != 'Debian' %}
/etc/init.d/isc-dhcp-server:
  file.symlink:
    - target: /lib/init/upstart-job
    - force: True
{% endif %}

/etc/sysctl.d/60-circle-firewall.conf:
  file.managed:
    - user: root
    - group: root
    - contents: "net.ipv4.ip_forward=1\nnet.ipv6.conf.all.forwarding=1"

/etc/sudoers.d/fwdriver:
  file.managed:
    - user: root
    - group: root
    - mode: 400
    - template: jinja
    - source: salt://fwdriver/files/sudoers

{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
systemd-sysctl:
  cmd.run:
    - name: /bin/systemctl restart systemd-sysctl
  service.running:
    - watch:
      - file: /etc/sysctl.d/60-circle-firewall.conf
    - require:
      - cmd: systemd-sysctl
{% endif %}

{% if grains['os_family'] == 'RedHat' %}
/root/firewall-init.te:
  file.managed:
    - source: salt://fwdriver/files/firewall-init.te
    - template: jinja
    - mode: 644

firewall-init_semodule:
  cmd.run:
    - cwd: /root
    - user: root
    - name: checkmodule -M -m -o firewall-init.mod firewall-init.te; semodule_package -o firewall-init.pp -m firewall-init.mod; semodule -i firewall-init.pp
    - unless: semodule -l |grep -qs ^firewall-init
    - require:
      - file: /root/firewall-init.te
{% endif %}
ddns-update-style none;
default-lease-time 60000;
max-lease-time 720000;
log-facility local7;
include "/etc/dhcp/dhcpd.conf.generated";
[Unit]
Description=CIRCLE firewall init
After=network.target
#Before=firewall.service
[Service]
User=root
Group=root
Type=oneshot
ExecStart=/bin/bash -c "/bin/systemctl restart openvswitch{%if grains['os']=='Debian'%}-switch{%endif%} ; /sbin/ip netns add fw || true; ovs-vsctl del-br firewall || true; /sbin/ip netns exec fw sysctl -f /etc/sysctl.d/60-circle-firewall.conf ; /sbin/ip netns exec fw ip link set lo up"
[Install]
WantedBy=multi-user.target
module firewall-init 1.0;
require {
type ifconfig_t;
type ifconfig_var_run_t;
type virtio_device_t;
type root_t;
class dir mounton;
class chr_file { read write };
}
#============= ifconfig_t ==============
#!!!! This avc is allowed in the current policy
allow ifconfig_t ifconfig_var_run_t:dir mounton;
#!!!! This avc is allowed in the current policy
allow ifconfig_t root_t:dir mounton;
#!!!! This avc is allowed in the current policy
allow ifconfig_t virtio_device_t:chr_file { read write };
description "ISC DHCP IPv4 server"
author "Stéphane Graber <stgraber@ubuntu.com>"
start on runlevel [2345]
stop on runlevel [!2345]
pre-start script
if [ ! -f /etc/default/isc-dhcp-server ]; then
echo "/etc/default/isc-dhcp-server does not exist! - Aborting..."
echo "Run 'dpkg-reconfigure isc-dhcp-server' to fix the problem."
stop
exit 0
fi
. /etc/default/isc-dhcp-server
if [ -f /etc/ltsp/dhcpd.conf ]; then
CONFIG_FILE=/etc/ltsp/dhcpd.conf
else
CONFIG_FILE=/etc/dhcp/dhcpd.conf
fi
if [ ! -f $CONFIG_FILE ]; then
echo "$CONFIG_FILE does not exist! - Aborting..."
echo "Please create and configure $CONFIG_FILE to fix the problem."
stop
exit 0
fi
if ! ip netns exec fw dhcpd -user dhcpd -group dhcpd -t -q -4 -cf $CONFIG_FILE > /dev/null 2>&1; then
echo "dhcpd self-test failed. Please fix the config file."
echo "The error was: "
ip netns exec fw dhcpd -user dhcpd -group dhcpd -t -4 -cf $CONFIG_FILE
stop
exit 0
fi
end script
respawn
script
if [ -f /etc/ltsp/dhcpd.conf ]; then
CONFIG_FILE=/etc/ltsp/dhcpd.conf
else
CONFIG_FILE=/etc/dhcp/dhcpd.conf
fi
. /etc/default/isc-dhcp-server
# Allow dhcp server to write lease and pid file as 'dhcpd' user
mkdir -p /var/run/dhcp-server
chown dhcpd:dhcpd /var/run/dhcp-server
# The leases files need to be root:root even when dropping privileges
[ -e /var/lib/dhcp/dhcpd.leases ] || touch /var/lib/dhcp/dhcpd.leases
chown root:root /var/lib/dhcp /var/lib/dhcp/dhcpd.leases
if [ -e /var/lib/dhcp/dhcpd.leases~ ]; then
chown root:root /var/lib/dhcp/dhcpd.leases~
fi
exec ip netns exec fw dhcpd -user dhcpd -group dhcpd -f -q -4 -pf /run/dhcp-server/dhcpd.pid -cf $CONFIG_FILE $INTERFACES
end script
export GATEWAY={{ pillar['fwdriver']['gateway'] }}
export AMQP_URI=amqp://{{ pillar['amqp']['user'] }}:{{ pillar['amqp']['password'] }}@{{ pillar['amqp']['host'] }}:{{ pillar['amqp']['port'] }}/{{ pillar['amqp']['vhost'] }}
export CACHE_URI={{ pillar['cache'] }}
export BRIDGE_TYPE=NONE
{{ pillar['fwdriver']['user'] }} ALL= (ALL) NOPASSWD: /sbin/ip netns exec fw ip addr *, /sbin/ip netns exec fw ip ro *, /sbin/ip netns exec fw ip link *, /sbin/ip netns exec fw ipset *, /usr/bin/ovs-vsctl, /sbin/ip netns exec fw iptables-restore -c, /sbin/ip netns exec fw ip6tables-restore -c, /etc/init.d/isc-dhcp-server restart, /sbin/ip link *, /sbin/iptables-restore -c, /sbin/ip6tables-restore -c, /sbin/ipset *, /bin/systemctl restart dhcpd
Defaults: fw !requiretty
include:
  - common

gitrepo_fwdriver:
  git.latest:
    - name: {{ pillar['fwdriver']['repo_name'] }}
    - rev: {{ pillar['fwdriver']['repo_revision'] }}
    - target: /home/{{ pillar['fwdriver']['user'] }}/fwdriver
    - user: {{ pillar['fwdriver']['user'] }}
    - require:
      - pkg: git

include:
  - fwdriver.gitrepo
  - fwdriver.virtualenv
  - fwdriver.configuration

firewall:
  pkg.installed:
    - pkgs:
{% if grains['os_family'] == 'RedHat' %}
      - zlib-devel
      - python-virtualenvwrapper
      - python-devel
      - libmemcached-devel
      - python2-pip
      - dhcp
{% else %}
      - zlib1g-dev
      - virtualenvwrapper
      - python-dev
      - libmemcached-dev
      - openvswitch-switch
      - python-pip
{% if grains['os'] != 'Debian' %}
{# No such package in Debian Jessie! #}
      - openvswitch-controller
{% endif %}
      - isc-dhcp-server
{% endif %}
      - git
      - ntp
      - iptables
      - ipset
    - require:
      - user: {{ pillar['fwdriver']['user'] }}
    - require_in:
      - git: gitrepo_fwdriver
      - virtualenv: virtualenv_fwdriver
  user:
    - present
    - name: {{ pillar['fwdriver']['user'] }}
    - gid_from_name: True
  service:
    - enabled
    - require:
      - service: firewall-init

firewall-init:
  service:
    - enabled

virtualenv_fwdriver:
  virtualenv.managed:
    - name: /home/{{ pillar['fwdriver']['user'] }}/.virtualenvs/fw
    - requirements: /home/{{ pillar['fwdriver']['user'] }}/fwdriver/requirements.txt
    - user: {{ pillar['fwdriver']['user'] }}
    - no_chown: true

postactivate:
  file.managed:
    - name: /home/{{ pillar['graphite']['user'] }}/.virtualenvs/graphite/bin/postactivate
    - source: salt://graphite/files/postactivate
    - template: jinja
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - mode: 700

requirements:
  file.managed:
    - name: /home/{{ pillar['graphite']['user'] }}/requirements.txt
    - template: jinja
    - source: salt://graphite/files/requirements.txt
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - require:
      - user: {{ pillar['graphite']['user'] }}

{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/graphite.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://graphite/files/graphite.service

/etc/systemd/system/graphite-carbon.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://graphite/files/graphite-carbon.service
{% else %}
/etc/init/graphite.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://graphite/files/graphite.conf

/etc/init/graphite-carbon.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: salt://graphite/files/graphite-carbon.conf
{% endif %}

/opt/graphite:
  file.directory:
    - makedirs: True
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - require:
      - user: {{ pillar['graphite']['user'] }}

/opt/graphite/conf/carbon.conf:
  file.managed:
    - source: salt://graphite/files/carbon.conf
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - template: jinja
    - makedirs: True
    - require:
      - user: {{ pillar['graphite']['user'] }}

/opt/graphite/conf/storage-schemas.conf:
  file.managed:
    - name: /opt/graphite/conf/storage-schemas.conf
    - source: salt://graphite/files/storage-schemas.conf
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - makedirs: True
    - require:
      - user: {{ pillar['graphite']['user'] }}

/opt/graphite/webapp/graphite/local_settings.py:
  file.managed:
    - source: salt://graphite/files/local_settings.py
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}
    - template: jinja
    - makedirs: True
    - require:
      - user: {{ pillar['graphite']['user'] }}
description "CIRCLE Cloud Graphite monitoring server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 30 30
setgid {{ pillar['graphite']['user'] }}
setuid {{ pillar['graphite']['user'] }}
env HOME=/home/{{ pillar['graphite']['user'] }}
env GRAPHITE_ROOT=/opt/graphite
env PYTHONPATH=/opt/graphite/lib
script
. $HOME/.virtualenvs/graphite/local/bin/activate
cd /opt/graphite/bin/
exec twistd --nodaemon --reactor=epoll --no_save carbon-cache
end script
[Unit]
Description=Graphite Carbon
After=network.target
[Service]
User={{ pillar['graphite']['user'] }}
Group={{ pillar['graphite']['user'] }}
Environment=PYTHONPATH=/opt/graphite/lib GRAPHITE_ROOT=/opt/graphite
WorkingDirectory=/opt/graphite/bin/
ExecStart=/bin/bash -c "source /etc/profile; workon graphite; exec twistd --nodaemon --reactor=epoll --no_save carbon-cache"
Restart=always
[Install]
WantedBy=multi-user.target
diff --git a/render/evaluator.py b/render/evaluator.py
index 70490a2..ee7cfd1 100644
--- a/render/evaluator.py
+++ b/render/evaluator.py
@@ -37,7 +37,7 @@ def evaluateTokens(requestContext, tokens):
return float(tokens.number.scientific[0])
elif tokens.string:
- return str(tokens.string)[1:-1]
+ return unicode(tokens.string)[1:-1]
elif tokens.boolean:
return tokens.boolean[0] == 'true'
diff --git a/render/glyph.py b/render/glyph.py
index a2cc893..7daadce 100644
--- a/render/glyph.py
+++ b/render/glyph.py
@@ -181,7 +181,7 @@ class Graph:
self.drawRectangle( 0, 0, self.width, self.height )
if 'colorList' in params:
- colorList = unquote_plus( str(params['colorList']) ).split(',')
+ colorList = unquote_plus( unicode(params['colorList']) ).split(',')
else:
colorList = self.defaultColorList
self.colors = itertools.cycle( colorList )
@@ -572,7 +572,7 @@ class LineGraph(Graph):
if 'yUnitSystem' not in params:
params['yUnitSystem'] = 'si'
else:
- params['yUnitSystem'] = str(params['yUnitSystem']).lower()
+ params['yUnitSystem'] = unicode(params['yUnitSystem']).lower()
if params['yUnitSystem'] not in UnitSystems.keys():
params['yUnitSystem'] = 'si'
@@ -630,11 +630,11 @@ class LineGraph(Graph):
self.setColor( self.foregroundColor )
if params.get('title'):
- self.drawTitle( str(params['title']) )
+ self.drawTitle( unicode(params['title']) )
if params.get('vtitle'):
- self.drawVTitle( str(params['vtitle']) )
+ self.drawVTitle( unicode(params['vtitle']) )
if self.secondYAxis and params.get('vtitleRight'):
- self.drawVTitle( str(params['vtitleRight']), rightAlign=True )
+ self.drawVTitle( unicode(params['vtitleRight']), rightAlign=True )
self.setFont()
if not params.get('hideLegend', len(self.data) > settings.LEGEND_MAX_ITEMS):
@@ -1582,7 +1582,7 @@ class PieGraph(Graph):
if slice['value'] < 10 and slice['value'] != int(slice['value']):
label = "%.2f" % slice['value']
else:
- label = str(int(slice['value']))
+ label = unicode(int(slice['value']))
extents = self.getExtents(label)
theta = slice['midAngle']
x = self.x0 + (self.radius / 2.0 * math.cos(theta))
diff --git a/render/hashing.py b/render/hashing.py
index 6575650..45f1bfe 100644
--- a/render/hashing.py
+++ b/render/hashing.py
@@ -49,7 +49,7 @@ def stripControlChars(string):
def compactHash(string):
hash = md5()
- hash.update(string)
+ hash.update(string.encode('utf-8'))
return hash.hexdigest()
description "CIRCLE Cloud Graphite monitoring server"
start on runlevel [2345]
stop on runlevel [!2345]
respawn
respawn limit 30 30
setgid {{ pillar['graphite']['user'] }}
setuid {{ pillar['graphite']['user'] }}
env HOME=/home/{{ pillar['graphite']['user'] }}
script
. $HOME/.virtualenvs/graphite/local/bin/activate
cd /opt/graphite/webapp/graphite
PYTHONPATH=/opt/graphite/webapp exec django-admin.py runserver [::]:8081 --settings=graphite.settings
end script
[Unit]
Description=Graphite
After=network.target
[Service]
User={{ pillar['graphite']['user'] }}
Group={{ pillar['graphite']['user'] }}
WorkingDirectory=/opt/graphite/webapp/graphite
ExecStart=/bin/bash -c "source /etc/profile; workon graphite; PYTHONPATH=/opt/graphite/webapp exec django-admin.py runserver [::]:8081 --settings=graphite.settings"
Restart=always
[Install]
WantedBy=multi-user.target
## Graphite local_settings.py
# Edit this file to customize the default Graphite webapp settings
#
# Additional customizations to Django settings can be added to this file as well
#####################################
# General Configuration #
#####################################
# Set this to a long, random unique string to use as a secret key for this
# install. This key is used for salting of hashes used in auth tokens,
# CRSF middleware, cookie storage, etc. This should be set identically among
# instances if used behind a load balancer.
SECRET_KEY = "{{ pillar['graphite']['secret_key'] }}"
# In Django 1.5+ set this to the list of hosts your graphite instances is
# accessible as. See:
# https://docs.djangoproject.com/en/dev/ref/settings/#std:setting-ALLOWED_HOSTS
#ALLOWED_HOSTS = [ '*' ]
# Set your local timezone (Django's default is America/Chicago)
# If your graphs appear to be offset by a couple hours then this probably
# needs to be explicitly set to your local timezone.
TIME_ZONE = "{{ pillar['timezone'] }}"
# Override this to provide documentation specific to your Graphite deployment
#DOCUMENTATION_URL = "http://graphite.readthedocs.org/"
# Logging
#LOG_RENDERING_PERFORMANCE = True
#LOG_CACHE_PERFORMANCE = True
#LOG_METRIC_ACCESS = True
# Enable full debug page display on exceptions (Internal Server Error pages)
#DEBUG = True
# If using RRD files and rrdcached, set to the address or socket of the daemon
#FLUSHRRDCACHED = 'unix:/var/run/rrdcached.sock'
# This lists the memcached servers that will be used by this webapp.
# If you have a cluster of webapps you should ensure all of them
# have the *exact* same value for this setting. That will maximize cache
# efficiency. Setting MEMCACHE_HOSTS to be empty will turn off use of
# memcached entirely.
#
# You should not use the loopback address (127.0.0.1) here if using clustering
# as every webapp in the cluster should use the exact same values to prevent
# unneeded cache misses. Set to [] to disable caching of images and fetched data
#MEMCACHE_HOSTS = ['10.10.10.10:11211', '10.10.10.11:11211', '10.10.10.12:11211']
#DEFAULT_CACHE_DURATION = 60 # Cache images and data for 1 minute
#####################################
# Filesystem Paths #
#####################################
# Change only GRAPHITE_ROOT if your install is merely shifted from /opt/graphite
# to somewhere else
#GRAPHITE_ROOT = '/opt/graphite'
# Most installs done outside of a separate tree such as /opt/graphite will only
# need to change these three settings. Note that the default settings for each
# of these is relative to GRAPHITE_ROOT
#CONF_DIR = '/opt/graphite/conf'
#STORAGE_DIR = '/opt/graphite/storage'
#CONTENT_DIR = '/opt/graphite/webapp/content'
# To further or fully customize the paths, modify the following. Note that the
# default settings for each of these are relative to CONF_DIR and STORAGE_DIR
#
## Webapp config files
#DASHBOARD_CONF = '/opt/graphite/conf/dashboard.conf'
#GRAPHTEMPLATES_CONF = '/opt/graphite/conf/graphTemplates.conf'
## Data directories
# NOTE: If any directory is unreadable in DATA_DIRS it will break metric browsing
#WHISPER_DIR = '/opt/graphite/storage/whisper'
#RRD_DIR = '/opt/graphite/storage/rrd'
#DATA_DIRS = [WHISPER_DIR, RRD_DIR] # Default: set from the above variables
#LOG_DIR = '/opt/graphite/storage/log/webapp'
#INDEX_FILE = '/opt/graphite/storage/index' # Search index file
#####################################
# Email Configuration #
#####################################
# This is used for emailing rendered Graphs
# Default backend is SMTP
#EMAIL_BACKEND = 'django.core.mail.backends.smtp.EmailBackend'
#EMAIL_HOST = 'localhost'
#EMAIL_PORT = 25
#EMAIL_HOST_USER = ''
#EMAIL_HOST_PASSWORD = ''
#EMAIL_USE_TLS = False
# To drop emails on the floor, enable the Dummy backend:
#EMAIL_BACKEND = 'django.core.mail.backends.dummy.EmailBackend'
#####################################
# Authentication Configuration #
#####################################
## LDAP / ActiveDirectory authentication setup
#USE_LDAP_AUTH = True
#LDAP_SERVER = "ldap.mycompany.com"
#LDAP_PORT = 389
# OR
#LDAP_URI = "ldaps://ldap.mycompany.com:636"
#LDAP_SEARCH_BASE = "OU=users,DC=mycompany,DC=com"
#LDAP_BASE_USER = "CN=some_readonly_account,DC=mycompany,DC=com"
#LDAP_BASE_PASS = "readonly_account_password"
#LDAP_USER_QUERY = "(username=%s)" #For Active Directory use "(sAMAccountName=%s)"
#
# If you want to further customize the ldap connection options you should
# directly use ldap.set_option to set the ldap module's global options.
# For example:
#
#import ldap
#ldap.set_option(ldap.OPT_X_TLS_REQUIRE_CERT, ldap.OPT_X_TLS_ALLOW)
#ldap.set_option(ldap.OPT_X_TLS_CACERTDIR, "/etc/ssl/ca")
#ldap.set_option(ldap.OPT_X_TLS_CERTFILE, "/etc/ssl/mycert.pem")
#ldap.set_option(ldap.OPT_X_TLS_KEYFILE, "/etc/ssl/mykey.pem")
# See http://www.python-ldap.org/ for further details on these options.
## REMOTE_USER authentication. See: https://docs.djangoproject.com/en/dev/howto/auth-remote-user/
#USE_REMOTE_USER_AUTHENTICATION = True
# Override the URL for the login link (e.g. for django_openid_auth)
#LOGIN_URL = '/account/login'
##########################
# Database Configuration #
##########################
# By default sqlite is used. If you cluster multiple webapps you will need
# to setup an external database (such as MySQL) and configure all of the webapp
# instances to use the same database. Note that this database is only used to store
# Django models such as saved graphs, dashboards, user preferences, etc.
# Metric data is not stored here.
#
# DO NOT FORGET TO RUN 'manage.py syncdb' AFTER SETTING UP A NEW DATABASE
#
# The following built-in database engines are available:
# django.db.backends.postgresql # Removed in Django 1.4
# django.db.backends.postgresql_psycopg2
# django.db.backends.mysql
# django.db.backends.sqlite3
# django.db.backends.oracle
#
# The default is 'django.db.backends.sqlite3' with file 'graphite.db'
# located in STORAGE_DIR
#
#DATABASES = {
# 'default': {
# 'NAME': '/opt/graphite/storage/graphite.db',
# 'ENGINE': 'django.db.backends.sqlite3',
# 'USER': '',
# 'PASSWORD': '',
# 'HOST': '',
# 'PORT': ''
# }
#}
#
#########################
# Cluster Configuration #
#########################
# (To avoid excessive DNS lookups you want to stick to using IP addresses only in this entire section)
#
# This should list the IP address (and optionally port) of the webapp on each
# remote server in the cluster. These servers must each have local access to
# metric data. Note that the first server to return a match for a query will be
# used.
#CLUSTER_SERVERS = ["10.0.2.2:80", "10.0.2.3:80"]
## These are timeout values (in seconds) for requests to remote webapps
#REMOTE_STORE_FETCH_TIMEOUT = 6 # Timeout to fetch series data
#REMOTE_STORE_FIND_TIMEOUT = 2.5 # Timeout for metric find requests
#REMOTE_STORE_RETRY_DELAY = 60 # Time before retrying a failed remote webapp
#REMOTE_FIND_CACHE_DURATION = 300 # Time to cache remote metric find results
## Remote rendering settings
# Set to True to enable rendering of Graphs on a remote webapp
#REMOTE_RENDERING = True
# List of IP (and optionally port) of the webapp on each remote server that
# will be used for rendering. Note that each rendering host should have local
# access to metric data or should have CLUSTER_SERVERS configured
#RENDERING_HOSTS = []
#REMOTE_RENDER_CONNECT_TIMEOUT = 1.0
# If you are running multiple carbon-caches on this machine (typically behind a relay using
# consistent hashing), you'll need to list the ip address, cache query port, and instance name of each carbon-cache
# instance on the local machine (NOT every carbon-cache in the entire cluster). The default cache query port is 7002
# and a common scheme is to use 7102 for instance b, 7202 for instance c, etc.
#
# You *should* use 127.0.0.1 here in most cases
#CARBONLINK_HOSTS = ["127.0.0.1:7002:a", "127.0.0.1:7102:b", "127.0.0.1:7202:c"]
#CARBONLINK_TIMEOUT = 1.0
#####################################
# Additional Django Settings #
#####################################
# Uncomment the following line for direct access to Django settings such as
# MIDDLEWARE_CLASSES or APPS
#from graphite.app_settings import *
export AMQP_URI=amqp://{{ pillar['amqp']['user'] }}:{{ pillar['amqp']['password'] }}@{{ pillar['amqp']['host'] }}:{{ pillar['amqp']['port'] }}/{{ pillar['amqp']['vhost'] }}
export CACHE_URI={{ pillar['cache'] }}
[carbon]
pattern = ^carbon\.
retentions = 60:90d
[default]
pattern = .*
retentions = 60s:1d,240s:1w,1h:30d,6h:1y
#!/bin/bash
source /home/{{ pillar['graphite']['user'] }}/.virtualenvs/graphite/bin/activate;
cd /opt/graphite/webapp/graphite/
PYTHONPATH=/opt/graphite/webapp django-admin.py syncdb --settings=graphite.settings --noinput
include:
  - graphite.rabbitmq
  - graphite.virtualenv
  - graphite.configuration

graphite:
  pkg.installed:
    - pkgs:
      - git
      - ntp
{% if grains['os_family'] == 'RedHat' %}
      - python2-pip
      - pycairo
      - python-devel
      - python-virtualenvwrapper
      - dejavu-sans-fonts
{% else %}
      - python-pip
      - python-cairo
      - python-dev
      - virtualenvwrapper
{% endif %}
    - require:
      - user: {{ pillar['graphite']['user'] }}
    - require_in:
      - virtualenv: virtualenv_graphite
      - service: graphite
      - service: graphite-carbon
  user:
    - present
    - name: {{ pillar['graphite']['user'] }}
    - gid_from_name: True
  service:
    - running
    - enable: True

graphite-carbon:
  service:
    - running
    - enable: True

rabbitmq-server_monitor:
  pkg.installed:
    - name: rabbitmq-server
  service:
    - running
    - name: rabbitmq-server
    - require:
      - pkg: rabbitmq-server

rabbitmq_user_monitor:
  rabbitmq_user.present:
    - name: {{ pillar['graphite']['user'] }}
    - password: {{ pillar['graphite']['password'] }}

virtual_host_monitor:
  rabbitmq_vhost.present:
    - name: {{ pillar['graphite']['vhost']}}
    - user: {{ pillar['graphite']['user'] }}
    - conf: .*
    - write: .*
    - read: .*

virtualenv_graphite:
  virtualenv.managed:
    - name: /home/{{ pillar['graphite']['user'] }}/.virtualenvs/graphite
    - requirements: /home/{{ pillar['graphite']['user'] }}/requirements.txt
    - user: {{ pillar['graphite']['user'] }}
    - require:
      - user: {{ pillar['graphite']['user'] }}
      - file: /home/{{ pillar['graphite']['user'] }}/requirements.txt
      - file: /opt/graphite

global-site-packages:
  file.absent:
    - name: /home/{{pillar['graphite']['user'] }}/.virtualenvs/graphite/lib/python2.7/no-global-site-packages.txt
    - require:
      - virtualenv: virtualenv_graphite

unicode-fix-diff:
  file.managed:
    - name: /home/{{pillar['graphite']['user'] }}/graphite-unicode-fix.diff
    - template: jinja
    - source: salt://graphite/files/graphite-unicode-fix.diff
    - user: {{ pillar['graphite']['user'] }}
    - group: {{ pillar['graphite']['user'] }}

unicode-fix:
  cmd.run:
    - user: {{ pillar['graphite']['user'] }}
    - cwd: /opt/graphite/webapp/graphite
    - name: patch -N -p1 < /home/{{pillar['graphite']['user'] }}/graphite-unicode-fix.diff
    - onlyif: patch -N --dry-run --silent -p1 < /home/{{pillar['graphite']['user'] }}/graphite-unicode-fix.diff
    - require:
      - virtualenv: virtualenv_graphite
      - user: {{ pillar['graphite']['user'] }}
      - file: unicode-fix-diff

salt://graphite/files/syncdb.sh:
  cmd.script:
    - template: jinja
    - user: {{ pillar['graphite']['user'] }}
    - require:
      - virtualenv: virtualenv_graphite
      - user: {{ pillar['graphite']['user'] }}

include:
  - common

agentgit:
  git.latest:
    - name: {{ pillar['agent']['repo_name'] }}
    - rev: {{ pillar['agent']['repo_revision'] }}
    - target: /home/{{ pillar['user'] }}/agent/agent-linux
    - user: {{ pillar['user'] }}
    - require:
      - pkg: git

manager_postactivate:
  file.managed:
    - name: /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
    - source: salt://manager/files/postactivate
    - template: jinja
    - user: {{ pillar['user'] }}
    - mode: 700

portal.conf:
  file.managed:
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
    - name: /etc/systemd/system/portal.service
{% else %}
    - name: /etc/init/portal.conf
{% endif %}
    - user: root
    - group: root
    - template: jinja
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
{% if pillar['deployment_type'] == 'production' %}
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal-uwsgi.service
{% else %}
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal.service
{% endif %}
{% else %}
{% if pillar['deployment_type'] == 'production' %}
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal-uwsgi.conf
{% else %}
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal.conf
{% endif %}
{% endif %}

{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/manager.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/manager.service

/etc/systemd/system/managercelery@.service:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/managercelery@.service
{% else %}
/etc/init/manager.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/manager.conf

/etc/init/mancelery.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/mancelery.conf

/etc/init/moncelery.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/moncelery.conf

/etc/init/slowcelery.conf:
  file.managed:
    - user: root
    - group: root
    - template: jinja
    - source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/slowcelery.conf
{% endif %}

salt://manager/files/init.sh:
  cmd.script:
    - template: jinja
    - user: {{ pillar['user'] }}
    - stateful: true
    - require:
      - virtualenv: virtualenv_manager
      - file: /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
      - user: {{ pillar['user'] }}
#!/bin/bash
cd /home/{{ pillar['user'] }}/circle/circle/
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/activate
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
MANAGE="python /home/{{ pillar['user'] }}/circle/circle/manage.py"
bower install
$MANAGE compileless
$MANAGE compilejsi18n -o dashboard/static/jsi18n
COLLECTED=$($MANAGE collectstatic --noinput |
awk '/static files copied to/ {print $1}')
OLD_SHA=$(sha1sum locale/hu/LC_MESSAGES/*.mo)
$MANAGE compilemessages
NEW_SHA=$(sha1sum locale/hu/LC_MESSAGES/*.mo)
echo "$COLLECTED $NEW_SHA $OLD_SHA"
if [ "$NEW_SHA" != "$OLD_SHA" -o "$COLLECTED" -ne 0 ]; then
CHANGED=yes
else
CHANGED=no
fi
echo "changed=$CHANGED comment='copied: $COLLECTED'"
#!/bin/bash
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/activate
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
{% set fw = pillar['fwdriver'] %}
HOSTNAME=$(hostname -s)
EXTRAPARAMS=""
if [ "{{ pillar['vmdriver']['hypervisor_type'] }}" = "kvm" ]; then
EXTRAPARAMS="--kvm-present"
fi
exec python /home/{{ pillar['user'] }}/circle/circle/manage.py init \
--external-net={{ fw['external_net'] }} \
--management-net={{ fw['management_net'] }} \
--vm-net={{ fw['vm_net'] }} \
--admin-user={{ pillar['admin_user'] }} \
--admin-pass={{ pillar['admin_pass'] }} \
--datastore-queue={{ pillar['storagedriver']['queue_name'] }} \
--firewall-queue={{ fw['queue_name'] }} \
--external-if={{ fw['external_if'] }} \
--management-if={{ fw['management_if'] }} \
--vm-if={{ fw['vm_if'] }} \
--node-hostname=$HOSTNAME \
--node-mac="99:AA:BB:CC:DD:EE" \
--node-ip="127.0.0.1" \
--node-name=$HOSTNAME \
$EXTRAPARAMS
ignore_invalid_headers on;
server {
listen 443 ssl default;
ssl on;
ssl_certificate /etc/ssl/certs/circle.pem;
ssl_certificate_key /etc/ssl/certs/circle.pem;
{% if pillar['deployment_type'] == "production" %}
location /media {
alias /home/{{ pillar['user'] }}/circle/circle/media; # your Django project's media files
}
location /static {
alias /home/{{ pillar['user'] }}/circle/circle/static_collected; # your Django project's static files
}
{% endif %}
location / {
{% if pillar['deployment_type'] == "production" %}
uwsgi_pass unix:///tmp/uwsgi.sock;
include /etc/nginx/uwsgi_params; # or the uwsgi_params you installed manually
{% else %}
proxy_pass http://localhost:8080;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $http_Host;
proxy_set_header X-Forwarded-Protocol https;
{% endif %}
}
location /vnc/ {
proxy_pass http://localhost:9999;
proxy_set_header X-Real-IP $remote_addr;
proxy_set_header Host $host;
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
# WebSocket support (nginx 1.4)
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection "upgrade";
}
}
server {
listen 80 default;
rewrite ^ https://$host/; # permanent;
}
# For more information on configuration, see:
# * Official English Documentation: http://nginx.org/en/docs/
# * Official Russian Documentation: http://nginx.org/ru/docs/
user nginx;
worker_processes 1;
error_log /var/log/nginx/error.log;
#error_log /var/log/nginx/error.log notice;
#error_log /var/log/nginx/error.log info;
pid /run/nginx.pid;
events {
worker_connections 1024;
}
http {
include /etc/nginx/mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log /var/log/nginx/access.log main;
sendfile on;
#tcp_nopush on;
#keepalive_timeout 0;
keepalive_timeout 65;
#gzip on;
index index.html index.htm;
# Load modular configuration files from the /etc/nginx/conf.d directory.
# See http://nginx.org/en/docs/ngx_core_module.html#include
# for more information.
include /etc/nginx/conf.d/*.conf;
}
module nginx 1.0;
require {
type initrc_tmp_t;
type httpd_t;
type initrc_t;
class sock_file write;
class unix_stream_socket connectto;
}
#============= httpd_t ==============
allow httpd_t initrc_t:unix_stream_socket connectto;
#!!!! This avc is allowed in the current policy
allow httpd_t initrc_tmp_t:sock_file write;
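# Added note: the rules above let nginx (httpd_t) write to and connect to the
# uwsgi unix socket created by the portal service, which runs in the initrc_t
# domain. The module itself is compiled and loaded by the nginx_semodule state
# below via checkmodule / semodule_package / semodule.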
# TYPE DATABASE USER ADDRESS METHOD
# "local" is for Unix domain socket connections only
local all all peer
# IPv4 local connections:
host all all 127.0.0.1/32 md5
# IPv6 local connections:
host all all ::1/128 md5
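# Illustrative example (commented out; database and user names are
# placeholders, not values from this installer): a remote management subnet
# could be allowed with an additional line such as
# host    circledb    circleuser    10.0.0.0/24    md5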
# DO NOT EDIT THIS FILE
export AMQP_URI='amqp://{{ pillar['amqp']['user'] }}:{{ pillar['amqp']['password'] }}@{{ pillar['amqp']['host'] }}:{{ pillar['amqp']['port'] }}/{{ pillar['amqp']['vhost'] }}'
export CACHE_URI='{{ pillar['cache'] }}'
export DJANGO_SETTINGS_MODULE='circle.settings.{{ pillar['deployment_type'] }}'
export DJANGO_TIME_ZONE=UTC
export DJANGO_DB_HOST='localhost'
export DJANGO_DB_PASSWORD='{{ pillar['database']['password'] }}'
export DJANGO_FIREWALL_SETTINGS='{"dns_ip": "8.8.8.8", "dns_hostname":
"localhost", "dns_ttl": "300", "reload_sleep": "10",
"rdns_ip": "8.8.8.8", "default_vlangroup": "portforward"}'
export DJANGO_ALLOWED_HOSTS='*'
export DJANGO_MEMCACHED='localhost:11211'
#export DJANGO_SAML=TRUE
#export DJANGO_URL='<%= @django_url %>'
#export DJANGO_SAML_ATTRIBUTE_MAPPING='{"mail": ["email"], "sn": ["last_name"], "eduPersonPrincipalName": ["username"], "givenName": ["first_name"]}'
#export DJANGO_SAML_GROUP_OWNER_ATTRIBUTES='eduPersonScopedAffiliation'
#export DJANGO_SAML_GROUP_ATTRIBUTES='eduPersonScopedAffiliation'
export GRAPHITE_HOST='localhost'
export GRAPHITE_PORT='8081'
export GRAPHITE_HOST='{{ pillar['graphite']['host'] }}'
export GRAPHITE_AMQP_PORT='{{ pillar['graphite']['port'] }}'
export GRAPHITE_AMQP_USER='{{ pillar['graphite']['user'] }}'
export GRAPHITE_AMQP_PASSWORD='{{ pillar['graphite']['password'] }}'
export GRAPHITE_AMQP_QUEUE='{{ pillar['graphite']['queue'] }}'
export GRAPHITE_AMQP_VHOST='{{ pillar['graphite']['vhost'] }}'
export SECRET_KEY='{{ pillar['secret_key'] }}'
export PROXY_SECRET='{{ pillar['proxy_secret'] }}'
export DEFAULT_FROM_EMAIL='root@localhost'
#LOCAL="/home//.virtualenvs/circle/bin/postactivate.local"
#test -f "$LOCAL" && . "$LOCAL"
#!/bin/bash
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/activate >/dev/null 2>&1
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate >/dev/null 2>&1
MANAGE="python /home/{{ pillar['user'] }}/circle/circle/manage.py"
OUT=$( $MANAGE migrate 2>&1)
if [ $? -ne 0 ]; then
/usr/bin/python -c "import sys; import json; sys.stdout.write(json.dumps({'changed': False, 'comment': sys.stdin.read()}) + '\n')" <<< "$OUT"
exit 1
fi
COUNT=$(/bin/egrep " *Applying " -c <<< "$OUT")
if [ $? -eq 0 ]; then
CHANGED=yes
else
CHANGED=no
fi
echo "changed=$CHANGED comment='Migrated: $COUNT'"
include:
- common
gitrepo:
git.latest:
- name: {{ pillar['manager']['repo_name'] }}
- rev: {{ pillar['manager']['repo_revision'] }}
- target: /home/{{ pillar['user'] }}/circle
- user: {{ pillar['user'] }}
- require:
- pkg: git
include:
- manager.pipeline
- manager.gitrepo
- manager.agentgit
- manager.postgres
- manager.rabbitmq
- manager.virtualenv
- manager.configuration
- manager.nginx
manager:
pkg.installed:
- pkgs:
- postgresql
- git
- ntp
- rabbitmq-server
- memcached
- gettext
- wget
- swig
{% if grains['os_family'] == 'RedHat' %}
- python2-pip
- libffi-devel
- openssl-devel
- libmemcached-devel
- postgresql-devel
- postgresql-libs
- postgresql-server
- libxml2-devel
- libxslt-devel
- python-devel
- python-virtualenvwrapper
{% else %}
- python-pip
- libffi-dev
- libssl-dev
- libmemcached-dev
- libpq-dev
- libxml2-dev
- libxslt1-dev
- python-dev
- virtualenvwrapper
{% endif %}
- require_in:
- service: postgres_service
user:
- present
- name: {{ pillar['user'] }}
- gid_from_name: True
- shell: /bin/bash
- groups:
{% if grains['os_family'] == 'RedHat' %}
- wheel
{% else %}
- sudo
{% endif %}
service:
- running
- enable: True
- watch:
- file: manager_postactivate
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- file: /etc/systemd/system/manager.service
- file: /etc/systemd/system/managercelery@.service
{% else %}
- file: /etc/init/manager.conf
- file: /etc/init/mancelery.conf
- file: /etc/init/moncelery.conf
- file: /etc/init/slowcelery.conf
{% endif %}
- sls: manager.gitrepo
portal:
service:
- running
- enable: True
- watch:
- file: manager_postactivate
- file: portal.conf
- sls: manager.gitrepo
memcached:
service:
- running
- enable: True
- require:
- pkg: manager
nginx:
service.running:
- enable: True
- watch:
- pkg: nginx
- cmd: circlecert
- file: nginxdefault
- file: nginx_home_permission
{% if grains['os_family'] == 'RedHat' %}
- file: nginxconf
- cmd: nginx_no_private_temp
{% endif %}
pkg:
- installed
nginx_home_permission:
file.directory:
- name: /home/{{ pillar['user'] }}
- user: {{ pillar['user'] }}
- dir_mode: 711
circlecert:
cmd.run:
{% if grains['os_family'] == 'RedHat' %}
- name: ./make-dummy-cert circle.pem
{% else %}
- name: openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout circle.key -out circle.crt -subj '/CN=localhost/O=My Company Name LTD./C=US' && cat circle.key circle.crt > circle.pem && rm circle.key circle.crt; chmod 600 circle.pem
{% endif %}
- cwd: /etc/ssl/certs/
- creates: /etc/ssl/certs/circle.pem
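{# Added note: the generated bundle can be inspected with, for example,
   `openssl x509 -in /etc/ssl/certs/circle.pem -noout -subject -dates`.
   A CA-signed certificate installed under the same name is also picked up,
   since this state only runs while circle.pem is missing (creates:). #}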
{% if grains['os_family'] == 'RedHat' %}
nginx_selinux_pkgs:
pkg.installed:
- pkgs:
- policycoreutils
- policycoreutils-python
nginx_httpd_can_network_connect:
selinux.boolean:
- name: httpd_can_network_connect
- value: True
- persist: True
- require:
- pkg: nginx_selinux_pkgs
nginx_httpd_read_user_content:
selinux.boolean:
- name: httpd_read_user_content
- value: True
- persist: True
- require:
- pkg: nginx_selinux_pkgs
/root/nginx.te:
file.managed:
- source: salt://manager/files/nginx.te
- template: jinja
- mode: 644
nginx_semodule:
cmd.run:
- cwd: /root
- user: root
- name: checkmodule -M -m -o nginx.mod nginx.te; semodule_package -o nginx.pp -m nginx.mod; semodule -i nginx.pp
- unless: semodule -l |grep -qs ^nginx
- require:
- file: /root/nginx.te
- pkg: nginx_selinux_pkgs
nginx_no_private_temp:
cmd.run:
- user: root
- name: sed -i "/PrivateTmp/d" /usr/lib/systemd/system/nginx.service
- require:
- pkg: nginx
{% endif %}
nginxdefault:
file.managed:
{% if grains['os_family'] == 'RedHat' %}
- name: /etc/nginx/conf.d/default.conf
{% else %}
- name: /etc/nginx/sites-enabled/default
{% endif %}
- template: jinja
- source: salt://manager/files/nginx-default-site.conf
- user: root
- group: root
- require:
- pkg: nginx
{% if grains['os_family'] == 'RedHat' %}
nginxconf:
file.managed:
- name: /etc/nginx/nginx.conf
- template: jinja
- source: salt://manager/files/nginx.conf
- user: root
- group: root
- require:
- pkg: nginx
{% endif %}
{% if grains['os'] == 'Ubuntu' or grains['os'] == 'Debian' %}
nodejs-legacy:
pkg.installed
{% endif %}
npm:
{% if grains['os'] == 'Ubuntu' or grains['os'] == 'Debian' %}
pkg.installed:
- require:
- pkg: nodejs-legacy
{% else %}
pkg.installed
{% endif %}
bower:
npm.installed:
- require:
- pkg: npm
less:
npm.installed:
- require:
- pkg: npm
yuglify:
npm.installed:
- require:
- pkg: npm
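{# Added note: npm.installed installs bower, less and yuglify globally,
   roughly equivalent to `npm install -g bower less yuglify`; bower is the
   tool invoked by compile.sh when building the static assets. #}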
{% if grains['os_family'] == 'RedHat' %}
postgresql-server:
pkg.installed
postgresql_initdb:
cmd.run:
- cwd: /
- user: root
- name: postgresql-setup initdb
- unless: test -f /var/lib/pgsql/data/postgresql.conf
- env:
LC_ALL: C.UTF-8
file.managed:
- name: /var/lib/pgsql/data/pg_hba.conf
- template: jinja
- source: salt://manager/files/pg_hba.conf
- user: postgres
- group: postgres
- mode: 600
- require:
- cmd: postgresql_initdb
{% endif %}
postgres_service:
service.running:
- name: postgresql
- enable: True
{% if grains['os_family'] == 'RedHat' %}
- require:
- file: postgresql_initdb
{% endif %}
dbuser:
postgres_user.present:
- name: {{ pillar['database']['user'] }}
- password: {{ pillar['database']['password'] }}
- user: postgres
- require:
- service: postgresql
database:
postgres_database.present:
- name: {{ pillar['database']['name'] }}
- encoding: UTF8
- lc_ctype: en_US.UTF8
- lc_collate: en_US.UTF8
- template: template0
- owner: {{ pillar['database']['user'] }}
- user: postgres
- require:
- service: postgresql
- postgres_user: dbuser
rabbitmq-server:
pkg.installed:
- name: rabbitmq-server
{% if grains['os_family'] == 'RedHat' %}
file.managed:
- name: /etc/rabbitmq/rabbitmq-env.conf
- contents: RABBITMQ_DIST_PORT=5671
{% endif %}
service.running:
- enable: True
- require:
- pkg: rabbitmq-server
{% if grains['os_family'] == 'RedHat' %}
- file: rabbitmq-server
{% endif %}
rabbitmq_user:
rabbitmq_user.present:
- name: {{ pillar['amqp']['user'] }}
- password: {{ pillar['amqp']['password'] }}
- require:
- service: rabbitmq-server
virtual_host:
rabbitmq_vhost.present:
- name: {{ pillar['amqp']['vhost']}}
- user: {{ pillar['amqp']['user'] }}
- conf: .*
- write: .*
- read: .*
- require:
- service: rabbitmq-server
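{# Added note: the two states above correspond roughly to running
   (with the actual values taken from pillar):
     rabbitmqctl add_user <user> <password>
     rabbitmqctl add_vhost <vhost>
     rabbitmqctl set_permissions -p <vhost> <user> ".*" ".*" ".*"
#}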
include:
- common
# m2crypto workaround
# /usr/include/openssl/opensslconf.h:31: Error: CPP #error
# ""This openssl-devel package does not work your architecture?"".
# Use the -cpperraswarn option to continue swig processing.
{% if grains['os_family'] == 'RedHat' %}
m2crypto_swig_env:
environ.setenv:
- name: SWIG_FEATURES
- value: -D__x86_64__
{% endif %}
virtualenv_manager:
virtualenv.managed:
- name: /home/{{ pillar['user'] }}/.virtualenvs/circle
- requirements: /home/{{ pillar['user'] }}/circle/requirements/{{ pillar['deployment_type'] }}.txt
- user: {{ pillar['user'] }}
- cwd: /home/{{ pillar['user'] }}/circle/
- no_chown: true
- require:
- git: gitrepo
{% if grains['os_family'] == 'RedHat' %}
- environ: m2crypto_swig_env
{% endif %}
salt://manager/files/syncdb.sh:
cmd.script:
- template: jinja
- user: {{ pillar['user'] }}
- stateful: true
- require:
- virtualenv: virtualenv_manager
- file: /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
- user: {{ pillar['user'] }}
salt://manager/files/compile.sh:
cmd.script:
- template: jinja
- user: {{ pillar['user'] }}
- stateful: true
- require:
- virtualenv: virtualenv_manager
- file: /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
- user: {{ pillar['user'] }}
/home/{{ pillar['user'] }}/.virtualenvs/monitor-client/bin/postactivate:
file.managed:
- source: salt://monitor-client/files/postactivate
- template: jinja
- user: {{ pillar['user'] }}
- group: {{ pillar['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/monitor-client.service:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/monitor-client/miscellaneous/monitor-client.service
{% else %}
/etc/init/monitor-client.conf:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/monitor-client/miscellaneous/monitor-client.conf
{% endif %}
export GRAPHITE_HOST='{{ pillar['graphite']['host'] }}'
export GRAPHITE_PORT='{{ pillar['graphite']['port'] }}'
export GRAPHITE_AMQP_USER='{{ pillar['graphite']['user'] }}'
export GRAPHITE_AMQP_PASSWORD='{{ pillar['graphite']['password'] }}'
export GRAPHITE_AMQP_QUEUE='{{ pillar['graphite']['queue'] }}'
export GRAPHITE_AMQP_VHOST='{{ pillar['graphite']['vhost'] }}'
include:
- common
gitrepo_monitor-client:
git.latest:
- name: {{ pillar['monitor-client']['repo_name'] }}
- rev: {{ pillar['monitor-client']['repo_revision'] }}
- target: /home/{{ pillar['user'] }}/monitor-client
- user: {{ pillar['user'] }}
- require:
- pkg: git
include:
- monitor-client.gitrepo
- monitor-client.virtualenv
- monitor-client.configuration
monitor-client:
pkg.installed:
- pkgs:
- git
- ntp
- wget
{% if grains['os_family'] == 'RedHat' %}
- python2-pip
- python-devel
- python-virtualenvwrapper
{% else %}
- python-pip
- python-dev
- virtualenvwrapper
{% endif %}
- require_in:
- git: gitrepo_monitor-client
- virtualenv: virtualenv_monitor-client
service:
- running
- enable: True
- watch:
- pkg: monitor-client
- sls: monitor-client.gitrepo
- sls: monitor-client.virtualenv
- sls: monitor-client.configuration
virtualenv_monitor-client:
virtualenv.managed:
- name: /home/{{ pillar['user'] }}/.virtualenvs/monitor-client
- requirements: /home/{{ pillar['user'] }}/monitor-client/requirements.txt
- user: {{ pillar['user'] }}
- no_chown: true
#!/bin/bash
sed -i '/HWADDR=.*/d' /etc/sysconfig/network-scripts/ifcfg-vm
sed -i -e \$aNM_CONTROLLED=\"no\" /etc/sysconfig/network-scripts/ifcfg-vm
/bin/systemctl daemon-reload
ifup vm
systemctl restart firewall
systemctl restart dhcpd
exit 0
# systemd service file extras added by CIRCLE Salt installer:
# openvswitch and virtual network interface must be up before
# dhcpd is started
[Unit]
After=openvswitch-switch.service
[Service]
ExecStartPre=-/sbin/ifup vm
{# TODO: change 'vm' to pillar['fwdriver']['vm_if'] ? #}
{# TODO: similar patch for firewall.service ? #}
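{# Added note: this drop-in is installed to
   /etc/systemd/system/isc-dhcp-server.service.d/ and becomes effective after
   `systemctl daemon-reload`; the leading "-" in ExecStartPre makes a failing
   `ifup vm` non-fatal, so dhcpd still starts if the interface is already up. #}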
NETWORKING_IPV6=yes
IPV6FORWARDING=yes
#!/bin/bash
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/activate
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
python /home/{{ pillar['user'] }}/circle/circle/manage.py reload_firewall --sync --timeout={{ pillar['fwdriver']['reload_firewall_timeout'] }}
ovs-if:
cmd.run:
- name: ovs-vsctl add-port cloud vm tag=2 -- set Interface vm type=internal
- unless: ovs-vsctl list-ifaces cloud | grep "^vm$"
vm:
network.managed:
- enabled: True
- type: eth
- proto: none
- ipaddr: {{ pillar['fwdriver']['vm_net_ip'] }}
- netmask: {{ pillar['fwdriver']['vm_net_mask'] }}
- pre_up_cmds:
{% if grains['os_family'] == 'RedHat' %}
- /bin/systemctl restart openvswitch
{% elif grains['os'] == 'Debian' %}
- /bin/systemctl restart openvswitch-switch
{% else %}
- /etc/init.d/openvswitch-switch restart
{% endif %}
- require:
- cmd: ovs-if
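{# Added note: the result can be verified manually with `ovs-vsctl show`,
   which should list port "vm" (tag 2) on bridge "cloud", matching the
   ovs-if command above. #}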
{% if grains['os'] == 'Debian' %}
symlink_dhcpd:
file.symlink:
- name: /etc/init.d/dhcpd
- target: /etc/init.d/isc-dhcp-server
- force: True
cmd.run:
- name: /bin/systemctl daemon-reload
- require:
- file: symlink_dhcpd
{% endif %}
firewall2:
service:
- name: firewall
- running
- require:
- network: vm
reload_firewall:
cmd.script:
- name: salt://network/files/reload_firewall.sh
- template: jinja
- user: {{ pillar['user'] }}
- require:
- service: firewall2
{% if grains['os'] == 'Debian' %}
- cmd: symlink_dhcpd
{% endif %}
{% if grains['os_family'] == 'RedHat' %}
net_config:
file.managed:
- name: /etc/sysconfig/network
- source: salt://network/files/network
- user: root
- group: root
- mode: 644
fix_dhcp:
cmd.script:
- name: salt://network/files/fix_dhcp.sh
- require:
- cmd: reload_firewall
- file: net_config
{% endif %}
isc-dhcp-server:
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
cmd.run:
- name: /bin/systemctl restart dhcpd
{% if grains['os_family'] == 'RedHat' %}
- watch:
- cmd: fix_dhcp
{% elif grains['os'] == 'Debian' %}
- watch:
- cmd: fix_dhcp_daemon_reload
{% endif %}
{% endif %}
service.running:
- enable: True
{% if grains['os_family'] == 'RedHat' %}
- watch:
- cmd: fix_dhcp
{% elif grains['os'] == 'Debian' %}
- watch:
- cmd: fix_dhcp_daemon_reload
{% endif %}
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- name: dhcpd
- require:
- cmd: isc-dhcp-server
{% endif %}
{% if grains['os'] == 'Debian' %}
{# For next reboot #}
after_openvswitch_conf:
file.managed:
- name: /etc/systemd/system/isc-dhcp-server.service.d/after_openvswitch.conf
- source: salt://network/files/fix_dhcp_Debian.conf
- user: root
- group: root
- template: jinja
- makedirs: True
fix_dhcp_daemon_reload:
cmd.run:
- name: /bin/systemctl daemon-reload
- require:
- file: after_openvswitch_conf
{% endif %}
nfs-client:
pkg.installed:
- pkgs:
{% if grains['os_family'] == 'RedHat' %}
- nfs-utils
{% else %}
- nfs-common
{% endif %}
- require_in:
- mount: /datastore
/datastore:
mount.mounted:
- device: {{ pillar['nfs']['server'] }}:/datastore
- fstype: nfs
- opts: rw,nfsvers=3,noatime
- dump: 0
- pass_num: 2
- persist: True
- mkmnt: True
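{# Added note: with persist: True the mount is also written to /etc/fstab,
   roughly as (the server address comes from pillar['nfs']['server']):
   <server>:/datastore  /datastore  nfs  rw,nfsvers=3,noatime  0  2
#}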
include:
- profile
- agentdriver
- monitor-client
- vmdriver
- nfs-client
{% if grains['os_family'] == "RedHat" %}
openvswitch:
pkg.installed:
- sources:
- openvswitch: salt://openvswitch/files/openvswitch-2.3.1-1.x86_64.rpm
cmd.run:
- name: mkdir /etc/openvswitch; restorecon -R /etc/openvswitch/
- creates: /etc/openvswitch
- require:
- pkg: openvswitch
service:
- name: openvswitch
- running
- enable: True
- require:
- cmd: openvswitch
- require_in:
- cmd: ovs-bridge
{% endif %}
{% if grains['os']=='Debian' %}
{# For non-interactive shells, virtualenvwrapper commands
('workon' etc.) are not sourced automatically #}
/etc/profile:
file.append:
- text:
- "#Line below added for Debian by CIRCLE Salt installer"
- . /etc/bash_completion
{% endif %}
/home/{{ pillar['user'] }}/.virtualenvs/storagedriver/bin/postactivate:
file.managed:
- source: salt://storagedriver/files/postactivate
- template: jinja
- user: {{ pillar['user'] }}
- group: {{ pillar['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/storagecelery@.service:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/storagedriver/miscellaneous/storagecelery@.service
/etc/systemd/system/storage.service:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/storagedriver/miscellaneous/storage.service
{% else %}
/etc/init/storagecelery.conf:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/storagedriver/miscellaneous/storagecelery.conf
/etc/init/storage.conf:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['user'] }}/storagedriver/miscellaneous/storage.conf
{% endif %}
/datastore:
file.directory:
- user: {{ pillar['user'] }}
- group: {{ pillar['user'] }}
- mode: 755
# will not be needed in the future, once the new garbage collector is in place
/datastore/trash:
file.directory:
- user: {{ pillar['user'] }}
- group: {{ pillar['user'] }}
- mode: 755
- require:
- file: /datastore
/var/lib/libvirt/serial IN_CREATE setfacl -m u:{{ pillar['user'] }}:rw $@/$#
{{ pillar['nfs']['directory'] }} {{ pillar['nfs']['network'] }}(rw,async,insecure,no_subtree_check,no_root_squash)
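# Added note: after editing this export it can be re-applied without a full
# NFS server restart via `exportfs -ra`.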