Commit a0a63826 by Kálmán Viktor

Merge branch 'debian' into 'master'

Debian 8 compatible installer

Please test with supported distributions.


See merge request !5
parents aa5c987d 281aa630
# Circle Project Salt Installer

## OS Support

* Red Hat Linux family:
    * Red Hat Enterprise Linux 7+
    * CentOS 7+
    * Scientific Linux 7+
* Debian Linux family:
    * Debian Linux 8+
    * Ubuntu Linux 14.04 LTS

## Prerequisites

### Red Hat family

Install the EPEL repository (if the link is broken, please contact us):

```bash
sudo rpm -ivh http://dl.fedoraproject.org/pub/epel/7/x86_64/e/epel-release-7-5.noarch.rpm
```
Install some important packages:

```bash
sudo yum install python-pip gcc vim git
```

### Debian family

Install some important packages:

```bash
sudo apt-get update
sudo apt-get install python-pip vim git
```
## Install Salt

```bash
sudo pip install salt==2014.7.1
```
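To confirm that the expected Salt release is in place, you can check the version afterwards (a quick sanity check; the exact output depends on the installed release):

```bash
salt-call --version
# prints something like: salt-call 2014.7.1 (Helium)
```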
## Get the installer

Clone the circle installer git repository into the cloud user's home directory:

```bash
git clone https://git.ik.bme.hu/circle/salt.git
```
## Change variables

Modify the installer.sls pillar file (the variables are described below, and a minimal example excerpt follows the variable lists):

```
vim salt/pillar/installer.sls
```
Most used variables
-------------------

* **proxy_secret**: This is used to provide cryptographic signing, and should be set to a unique, unpredictable value (one way to generate such a value is shown after this list).
* **secret_key**: This is used to provide cryptographic signing, and should be set to a unique, unpredictable value.
* **deployment_type**: local (development) or production
* **admin_user**: user name to log in as admin on the site
* **admin_pass**: password to log in as admin on the site
* **database**:
    * **password**: database user's password
* **amqp**:
    * **password**: amqp user's password
    * **host**: amqp server IP - usually runs at localhost
* **graphite**:
    * **password**: graphite user's password
    * **host**: graphite server IP - usually runs at localhost
* **nfs**:
    * **enabled**: whether nfs is enabled
    * **server**: nfs server's hostname
    * **network**: nfs server's network to access files
    * **directory**: this directory will be shared
* **storagedriver**:
    * **queue_name**: the server's hostname
* **fwdriver**:
    * **queue_name**: the server's hostname
    * **gateway**: the server's gateway
    * **external_net**: the server's network
    * **external_if**: the server's network interface
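For the secret values, any source of strong randomness will do. A minimal sketch, assuming OpenSSL is available on the host:

```bash
# one possible way to generate a unique, unpredictable value for secret_key or proxy_secret
openssl rand -hex 32
```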
Other variables
---------------

* user: user who will install the software
* time zone: the server's time zone, in region/city format
* amqp:
    * user: amqp user
    * port: amqp server's port
    * vhost: virtual host - specifies the namespace for entities (exchanges and queues) referred to by the protocol
* agent:
    * repo_name: the agent repository's name
    * repo_revision: revision
* agentdriver:
    * repo_name: the agentdriver repository's name
    * repo_revision: revision
* cache: cache url - usually pylibmc://127.0.0.1:11211/
* database:
    * name: django database's name
    * user: database user
* fwdriver:
    * repo_name: the fwdriver repository's name
    * repo_revision: revision
    * user: fwdriver user name
    * vm_if: vm interface
    * vm_net: vm network
    * management_if: management interface
    * reload_firewall_timeout: timeout for synchronous firewall reload
* graphite:
    * user: graphite user
    * port: graphite server's port
    * secret_key: graphite's secret key
* manager:
    * repo_name: the manager repository's name
    * repo_revision: revision
* monitor-client:
    * repo_name: the monitor-client repository's name
    * repo_revision: revision
* storage-driver:
    * repo_name: the storage-driver repository's name
    * repo_revision: revision
* vm-driver:
    * repo_name: the vm-driver repository's name
    * repo_revision: revision
* vnc-driver:
    * repo_name: the vnc-driver repository's name
    * repo_revision: revision
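A minimal sketch of what an installer.sls excerpt might look like. All values below are placeholders or examples copied from the sample pillar shown later on this page, and must be replaced with your own settings; the real salt/pillar/installer.sls contains many more keys:

```yaml
deployment_type: production
admin_user: admin          # placeholder
admin_pass: changeme       # placeholder - use a strong password
proxy_secret: changeme     # placeholder - unique, unpredictable value
secret_key: changeme       # placeholder - unique, unpredictable value
database:
  password: changeme       # placeholder
fwdriver:
  gateway: 10.0.255.254    # example values, as in the sample pillar below
  external_if: eth0
  external_net: 10.0.0.97/16
```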
## Install Circle

Run the following installation command:

```bash
sudo salt-call state.sls allinone --local --file-root=/home/$USER/salt/salt --pillar-root=/home/$USER/salt/pillar
```

After this has finished, you should see a "Failed: 0" message.
If the installer fails, please visit the [Troubleshooting](#troubleshooting) section.

After the install, delete the agent.conf or agent.service file.
If you have upstart:

```bash
sudo rm -f /etc/init/agent.conf
```

Or if you have systemd:

```bash
sudo rm -f /etc/systemd/system/agent.service
```
## Quickstart - Standalone Node
To install an OS, we can use ISO images to boot from. Click on 'download disk' …
Finally, we can run the machine. Click on 'deploy' and start it. You can choose which node you want it to run on.

![ubuntu 14.04](_static/images/ubuntu.png)
## Troubleshooting

### Portal won't load

Port 443 may be closed. Check it and open it if necessary.
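For example, on a Red Hat family host using firewalld (this assumes firewalld is the active firewall; adapt the commands if you manage iptables directly):

```bash
# check whether anything listens on 443 and whether https is allowed through the firewall
sudo ss -tlnp | grep ':443'
sudo firewall-cmd --list-services

# open https permanently and reload the firewall rules
sudo firewall-cmd --permanent --add-service=https
sudo firewall-cmd --reload
```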
### Portal won't load on Ubuntu 14.04

Restart nginx:

```bash
sudo service nginx restart
```
### Cannot reach the internet from VMs on Red Hat family distros

Re-apply the sysctl settings (e.g. IP forwarding):

```bash
sudo systemctl restart systemd-sysctl
```
fwdriver:
vm_if: vm
vm_net: 192.168.2.254/24
vm_net_ip: 192.168.2.254
vm_net_mask: 255.255.255.0
management_if: eth5
management_net: 192.168.1.254/24
external_if: eth0
external_net: 10.0.0.97/16
gateway: 10.0.255.254
reload_firewall_timeout: 120
#nfs:
# enabled: true
# server: 10.0.0.115
# network: 192.168.1.0/24
# directory: /datastore
manager:
repo_name: https://git.ik.bme.hu/circle/cloud.git
repo_revision: master
- user: root
- group: root
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/agentdriver.service:
file.managed:
- user: root
incron:
{% endif %}
service:
- full_restart: true
- enable: true
- running
- watch:
- require_in:
- git: gitrepo_agentdriver
- virtualenv: virtualenv_agentdriver
include:
- profile
- agentdriver
- graphite
- manager
- monitor-client
- storagedriver
- vmdriver
include:
- openvswitch
/home/{{ pillar['fwdriver']['user'] }}/.virtualenvs/fw/bin/postactivate:
file.managed:
- source: salt://fwdriver/files/postactivate
- group: {{ pillar['fwdriver']['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/firewall.service:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall.service
/etc/systemd/system/firewall-init.service:
file.managed:
- user: root
- group: root
- template: jinja
- source: salt://fwdriver/files/firewall-init.service
{% else %}
/etc/init/firewall.conf:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall.conf
/etc/init/firewall-init.conf:
file.managed:
- user: root
- group: root
- template: jinja
- source: file:///home/{{ pillar['fwdriver']['user'] }}/fwdriver/miscellaneous/firewall-init.conf
{% endif %}
/etc/dhcp:
file.directory:
- mode: 755
/etc/dhcp/dhcpd.conf:
file.managed:
- user: {{ pillar['fwdriver']['user'] }}
- group: {{ pillar['fwdriver']['user'] }}
{% if grains['os_family'] != 'RedHat' and grains['os'] != 'Debian' %}
/etc/init.d/isc-dhcp-server:
file.symlink:
- target: /lib/init/upstart-job
- force: True
isc-dhcp-server:
service:
- running
- watch:
- file: /etc/dhcp/dhcpd.conf
- file: /etc/dhcp/dhcpd.conf.generated
- file: /etc/init.d/isc-dhcp-server
{% endif %}
/etc/sysctl.d/60-circle-firewall.conf:
file.managed:
- mode: 400
- template: jinja
- source: salt://fwdriver/files/sudoers
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
systemd-sysctl:
cmd.run:
- name: /bin/systemctl restart systemd-sysctl
service.running:
- watch:
- file: /etc/sysctl.d/60-circle-firewall.conf
- require:
- cmd: systemd-sysctl
{% endif %}
{% if grains['os_family'] == 'RedHat' %}
/root/firewall-init.te:
file.managed:
- source: salt://fwdriver/files/firewall-init.te
- template: jinja
- mode: 644
firewall-init_semodule:
cmd.run:
- cwd: /root
- user: root
- name: checkmodule -M -m -o firewall-init.mod firewall-init.te; semodule_package -o firewall-init.pp -m firewall-init.mod; semodule -i firewall-init.pp
- unless: semodule -l |grep -qs ^firewall-init
- require:
- file: /root/firewall-init.te
{% endif %}
[Unit]
Description=CIRCLE firewall init
After=network.target
#Before=firewall.service
[Service]
User=root
Group=root
Type=oneshot
ExecStart=/bin/bash -c "/bin/systemctl restart openvswitch{%if grains['os']=='Debian'%}-switch{%endif%} ; /sbin/ip netns add fw || true; ovs-vsctl del-br firewall || true; /sbin/ip netns exec fw sysctl -f /etc/sysctl.d/60-circle-firewall.conf ; /sbin/ip netns exec fw ip link set lo up"
[Install]
WantedBy=multi-user.target
module firewall-init 1.0;
require {
type ifconfig_t;
type ifconfig_var_run_t;
type virtio_device_t;
type root_t;
class dir mounton;
class chr_file { read write };
}
#============= ifconfig_t ==============
#!!!! This avc is allowed in the current policy
allow ifconfig_t ifconfig_var_run_t:dir mounton;
#!!!! This avc is allowed in the current policy
allow ifconfig_t root_t:dir mounton;
#!!!! This avc is allowed in the current policy
allow ifconfig_t virtio_device_t:chr_file { read write };
{{ pillar['fwdriver']['user'] }} ALL= (ALL) NOPASSWD: /sbin/ip netns exec fw ip addr *, /sbin/ip netns exec fw ip ro *, /sbin/ip netns exec fw ip link *, /sbin/ip netns exec fw ipset *, /usr/bin/ovs-vsctl, /sbin/ip netns exec fw iptables-restore -c, /sbin/ip netns exec fw ip6tables-restore -c, /etc/init.d/isc-dhcp-server restart, /sbin/ip link *, /sbin/iptables-restore -c, /sbin/ip6tables-restore -c, /sbin/ipset *, /bin/systemctl restart dhcpd
Defaults: fw !requiretty
firewall:
pkg.installed:
- pkgs:
{% if grains['os_family'] == 'RedHat' %}
- zlib-devel
- python-virtualenvwrapper
- python-devel
- libmemcached-devel
- dhcp
{% else %}
- zlib1g-dev
- virtualenvwrapper
- git
- python-pip
- python-dev
- libmemcached-dev
- ntp
- openvswitch-switch
{% if grains['os'] != 'Debian' %}
{# No such package in Debian Jessie! #}
- openvswitch-controller
{% endif %}
- isc-dhcp-server
{% endif %}
- git
- python-pip
- ntp
- iptables
- ipset
- isc-dhcp-server
- require:
- user: {{ pillar['fwdriver']['user'] }}
- require_in:
- git: gitrepo_fwdriver
- virtualenv: virtualenv_fwdriver
- service: isc-dhcp-server
user:
- present
- name: {{ pillar['fwdriver']['user'] }}
- gid_from_name: True
service:
- running
- enabled
- require:
- service: firewall-init
- watch:
- pkg: firewall
- sls: fwdriver.gitrepo
- sls: fwdriver.virtualenv
- sls: fwdriver.configuration
firewall-init:
service:
- running
- enabled
requirements:
file.managed:
- name: /home/{{ pillar['graphite']['user'] }}/requirements.txt
- template: jinja
- source: salt://graphite/files/requirements.txt
- user: {{ pillar['graphite']['user'] }}
- group: {{ pillar['graphite']['user'] }}
- require:
- user: {{ pillar['graphite']['user'] }}
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/graphite.service:
file.managed:
gunicorn
pytz
pyparsing
whisper
{% if grains['os_family'] != 'RedHat' %}
carbon==0.9.12
{% endif %}
graphite-web
{% if grains['os_family'] == 'RedHat' %}
python-carbon:
pkg.installed
{% endif %}
virtualenv_graphite:
virtualenv.managed:
- name: /home/{{ pillar['graphite']['user'] }}/.virtualenvs/graphite
portal.conf:
file.managed:
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- name: /etc/systemd/system/portal.service
{% else %}
- name: /etc/init/portal.conf
- user: root
- group: root
- template: jinja
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
{% if pillar['deployment_type'] == 'production' %}
- source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal-uwsgi.service
{% else %}
- source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal.service
{% endif %}
{% else %}
{% if pillar['deployment_type'] == 'production' %}
- source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal-uwsgi.conf
{% else %}
- source: file:///home/{{ pillar['user'] }}/circle/miscellaneous/portal.conf
{% endif %}
{% endif %}
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/manager.service:
file.managed:
- user: root
alias /home/{{ pillar['user'] }}/circle/circle/static_collected; # your Django project's static files
}
{% endif %}
location / {
{% if pillar['deployment_type'] == "production" %}
module nginx 1.0;
require {
type initrc_tmp_t;
type httpd_t;
type initrc_t;
class sock_file write;
class unix_stream_socket connectto;
}
#============= httpd_t ==============
allow httpd_t initrc_t:unix_stream_socket connectto;
#!!!! This avc is allowed in the current policy
allow httpd_t initrc_tmp_t:sock_file write;
- enable: True
- watch:
- file: manager_postactivate
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- file: /etc/systemd/system/manager.service
- file: /etc/systemd/system/managercelery@.service
{% else %}
nginx:
service.running:
- enable: True
- watch:
- pkg: nginx
- cmd: circlecert
- file: nginxdefault
- file: nginx_home_permission
{% if grains['os_family'] == 'RedHat' %}
- file: nginxconf
- cmd: nginx_no_private_temp
{% endif %}
pkg:
- installed
nginx_home_permission:
file.directory:
- name: /home/{{ pillar['user'] }}
- user: {{ pillar['user'] }}
- dir_mode: 711
circlecert:
cmd.run:
{% if grains['os_family'] == 'RedHat' %}
- name: ./make-dummy-cert circle.pem
{% else %}
- name: openssl req -new -newkey rsa:2048 -days 365 -nodes -x509 -keyout circle.key -out circle.crt -subj '/CN=localhost/O=My Company Name LTD./C=US' && cat circle.key circle.crt > circle.pem && rm circle.key circle.crt; chmod 600 circle.pem
{% endif %}
- cwd: /etc/ssl/certs/
- creates: /etc/ssl/certs/circle.pem
{% if grains['os_family'] == 'RedHat' %}
nginx_selinux_pkgs:
pkg.installed:
- pkgs:
- policycoreutils
- policycoreutils-python
nginx_httpd_can_network_connect:
selinux.boolean:
- name: httpd_can_network_connect
- value: True
- persist: True
- require:
- pkg: nginx_selinux_pkgs
nginx_httpd_read_user_content:
selinux.boolean:
- name: httpd_read_user_content
- value: True
- persist: True
- require:
- pkg: nginx_selinux_pkgs
/root/nginx.te:
file.managed:
- source: salt://manager/files/nginx.te
- template: jinja
- mode: 644
nginx_semodule:
cmd.run:
- cwd: /root
- user: root
- name: checkmodule -M -m -o nginx.mod nginx.te; semodule_package -o nginx.pp -m nginx.mod; semodule -i nginx.pp
- unless: semodule -l |grep -qs ^nginx
- require:
- file: /root/nginx.te
- pkg: nginx_selinux_pkgs
nginx_no_private_temp:
cmd.run:
- user: root
- name: sed -i "/PrivateTmp/d" /usr/lib/systemd/system/nginx.service
- require:
- pkg: nginx
{% endif %}
nginxdefault:
file.managed:
{% if grains['os_family'] == 'RedHat' %}
- name: /etc/nginx/conf.d/default.conf
{% else %}
- name: /etc/nginx/sites-enabled/default
{% endif %}
- template: jinja
- source: salt://manager/files/nginx-default-site.conf
- user: root
{% if grains['os'] == 'Ubuntu' or grains['os'] == 'Debian' %}
nodejs-legacy:
pkg.installed
{% endif %}
npm:
{% if grains['os'] == 'Ubuntu' or grains['os'] == 'Debian' %}
pkg.installed:
- require:
- pkg: nodejs-legacy
{% else %}
pkg.installed
{% endif %}
bower:
npm.installed:
{% if grains['os_family'] == 'RedHat' %}
postgresql-server:
pkg.installed
postgresql_initdb:
cmd.run:
- cwd: /
- group: {{ pillar['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/monitor-client.service:
file.managed:
- user: root
#!/bin/bash
sed -i '/HWADDR=.*/d' /etc/sysconfig/network-scripts/ifcfg-vm
sed -i -e \$aNM_CONTROLLED=\"no\" /etc/sysconfig/network-scripts/ifcfg-vm
/bin/systemctl daemon-reload
ifup vm
systemctl restart firewall
systemctl restart dhcpd
exit 0
# systemd service file extras added by CIRCLE Salt installer:
# openvswitch and virtual network interface must be up before
# dhcpd is started
[Unit]
After=openvswitch-switch.service
[Service]
ExecStartPre=-/sbin/ifup vm
{# TODO: change 'vm' to pillar['fwdriver']['vm_if'] ? #}
{# TODO: similar patch for firewall.service ? #}
NETWORKING_IPV6=yes
IPV6FORWARDING=yes
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/activate
source /home/{{ pillar['user'] }}/.virtualenvs/circle/bin/postactivate
python /home/{{ pillar['user'] }}/circle/circle/manage.py reload_firewall --sync --timeout={{ pillar['fwdriver']['reload_firewall_timeout'] }}
vm:
network.managed:
- enabled: True
- type: eth
- proto: none
- ipaddr: {{ pillar['fwdriver']['vm_net_ip'] }}
- netmask: {{ pillar['fwdriver']['vm_net_mask'] }}
- pre_up_cmds:
{% if grains['os_family'] == 'RedHat' %}
- /bin/systemctl restart openvswitch
{% elif grains['os'] == 'Debian' %}
- /bin/systemctl restart openvswitch-switch
{% else %}
- /etc/init.d/openvswitch-switch restart
{% endif %}
- require:
- cmd: ovs-if
{% if grains['os'] == 'Debian' %}
symlink_dhcpd:
file.symlink:
- name: /etc/init.d/dhcpd
- target: /etc/init.d/isc-dhcp-server
- force: True
cmd.run:
- name: /bin/systemctl daemon-reload
- require:
- file: symlink_dhcpd
{% endif %}
firewall2:
service:
- name: firewall
- require:
- network: vm
reload_firewall:
cmd.script:
- name: salt://network/files/reload_firewall.sh
- template: jinja
- user: {{ pillar['user'] }}
- require:
- service: firewall2
{% if grains['os'] == 'Debian' %}
- cmd: symlink_dhcpd
{% endif %}
{% if grains['os_family'] == 'RedHat' %}
net_config:
file.managed:
- name: /etc/sysconfig/network
- source: salt://network/files/network
- user: root
- group: root
- mode: 644
fix_dhcp:
cmd.script:
- name: salt://network/files/fix_dhcp.sh
- require:
- cmd: reload_firewall
- file: net_config
{% endif %}
isc-dhcp-server:
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
cmd.run:
- name: /bin/systemctl restart dhcpd
{% if grains['os_family'] == 'RedHat' %}
- watch:
- cmd: fix_dhcp
{% elif grains['os'] == 'Debian' %}
- watch:
- cmd: fix_dhcp_daemon_reload
{% endif %}
{% endif %}
service.running:
- enable: True
{% if grains['os_family'] == 'RedHat' %}
- watch:
- cmd: fix_dhcp
{% elif grains['os'] == 'Debian' %}
- watch:
- cmd: fix_dhcp_daemon_reload
{% endif %}
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- name: dhcpd
- require:
- cmd: isc-dhcp-server
{% endif %}
{% if grains['os'] == 'Debian' %}
{# For next reboot #}
after_openvswitch_conf:
file.managed:
- name: /etc/systemd/system/isc-dhcp-server.service.d/after_openvswitch.conf
- source: salt://network/files/fix_dhcp_Debian.conf
- user: root
- group: root
- template: jinja
- makedirs: True
fix_dhcp_daemon_reload:
cmd.run:
- name: /bin/systemctl daemon-reload
- require:
- file: after_openvswitch_conf
{% endif %}
nfs-client:
pkg.installed:
- pkgs:
{% if grains['os_family'] == 'RedHat' %}
- nfs-utils
{% else %}
- nfs-common
{% endif %}
- require_in:
- mount: /datastore
include:
- profile
- agentdriver
- monitor-client
- vmdriver
{% if grains['os_family'] == "RedHat" %}
openvswitch:
pkg.installed:
- sources:
- openvswitch: salt://openvswitch/files/openvswitch-2.3.1-1.x86_64.rpm
cmd.run:
- name: mkdir /etc/openvswitch; restorecon -R /etc/openvswitch/
- creates: /etc/openvswitch
- require:
- pkg: openvswitch
service:
- name: openvswitch
- running
- enable: True
- require:
- cmd: openvswitch
- required_in:
- cmd: ovs-bridge
{% endif %}
{% if grains['os']=='Debian' %}
{# For non-interactive shells, virtualenvwrapper commands
('workon' etc.) are not sourced automatically #}
/etc/profile:
file.append:
- text:
- "#Line below added for Debian by CIRCLE Salt installer"
- . /etc/bash_completion
{% endif %}
- group: {{ pillar['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/storagecelery@.service:
file.managed:
- user: root
{% if pillar['nfs']['enabled'] %}
rpcbind:
pkg:
- installed
service:
- running
- watch:
- file: /etc/exports
- require:
- pkg: rpcbind
nfs-server:
service:
{% if grains['os_family'] != 'RedHat' %}
- name: nfs-kernel-server
{% endif %}
- running
- watch:
- file: /etc/exports
- require:
- service: rpcbind
pkg.installed:
{% if grains['os_family'] == 'RedHat' %}
- name: nfs-utils
{% else %}
- name: nfs-kernel-server
{% endif %}
/etc/exports:
file.managed:
- template: jinja
include:
- openvswitch
/home/{{ pillar['user'] }}/.virtualenvs/vmdriver/bin/postactivate:
file.managed:
- source: salt://vmdriver/files/postactivate
- group: {{ pillar['user'] }}
- mode: 700
{% set service_dir = "/etc/systemd/system/" if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' else "/etc/init/" %}
{% set service_files = (("vmcelery@.service", "netcelery@.service", "node.service")
if grains['os_family'] == 'RedHat'
or grains['os'] == 'Debian' else
("vmcelery.conf", "netcelery.conf", "node.conf")) %}
{% for file in service_files %}
- source: file:///home/{{ pillar['user'] }}/vmdriver/miscellaneous/{{ file }}
{% endfor %}
ovs-bridge:
cmd.run:
- name: ovs-vsctl add-br cloud
[Allow cloud libvirt management permissions]
Identity=unix-user:cloud
Action=org.libvirt.unix.manage;org.libvirt.unix.monitor
ResultAny=yes
ResultInactive=yes
ResultActive=yes
{# TODO: change 'cloud' to ? #}
{{ pillar['user'] }} ALL = (ALL) NOPASSWD: /usr/bin/ovs-ofctl, /usr/bin/ovs-vsctl, /sbin/ip link set *
Defaults: {{ pillar['user'] }} !requiretty
module vmdriver 1.1;
require {
type virt_var_lib_t;
type svirt_tcg_t;
type svirt_t;
type default_t;
class sock_file { create unlink };
class dir { write remove_name add_name };
class lnk_file read;
}
#============= svirt_tcg_t ==============
allow svirt_tcg_t virt_var_lib_t:dir { write remove_name add_name };
allow svirt_tcg_t virt_var_lib_t:sock_file { create unlink };
#============= svirt_t ==============
allow svirt_t virt_var_lib_t:dir { write add_name };
allow svirt_t virt_var_lib_t:sock_file create;
allow svirt_t default_t:lnk_file read;
- libxslt1-dev
- openvswitch-common
- openvswitch-switch
{% if grains['os'] != 'Debian' %}
{# No such package in Debian Jessie! #}
- openvswitch-controller
{% endif %}
- python-dev
- python-libvirt
- virtualenvwrapper
{% endif %}
- require_in:
- file: /etc/default/libvirt-bin
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
- service: libvirtd
{% else %}
- file: /etc/apparmor.d/libvirt/TEMPLATE
- augeas: libvirtconf
- git: gitrepo_vmdriver
- virtualenv: virtualenv_vmdriver
agentdriver_service:
service:
- name: agentdriver
- running
- enable: true
- watch:
- pkg: agentdriver
- sls: agentdriver.gitrepo
- sls: agentdriver.virtualenv
- sls: agentdriver.configuration
node:
service:
- running
augeas_dependency:
pkg.installed:
- pkgs:
- python-augeas
libvirtconf:
augeas.change:
- context: /files/etc/libvirt/libvirtd.conf
file.append:
- text: libvirtd_opts="-d -l"
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
libvirtd:
{% else %}
libvirt-bin:
- file: /root/vmdriver.te
- pkg: selinux_pkgs
{% elif grains['os'] == 'Debian' %}
/usr/bin/kvm:
file.replace:
- pattern: -enable-kvm
- repl: ""
- watch:
- pkg: vmdriver
policycoreutils:
pkg.installed
{# Note: Debian Jessie has polkit 0.105, which uses pkla format instead of js #}
/etc/polkit-1/localauthority/50-local.d/org.libvirt.unix.manage.pkla:
file.managed:
- source: salt://vmdriver/files/org.libvirt.unix.manage.pkla
- user: root
- group: root
- template: jinja
polkitd:
service:
- running
- watch:
- file: /etc/polkit-1/localauthority/50-local.d/org.libvirt.unix.manage.pkla
{% else %}
/etc/apparmor.d/libvirt/TEMPLATE:
{% set libvirt_dir = "/usr/lib64/python2.7/site-packages/" if grains['os_family'] == 'RedHat' else "/usr/lib/python2.7/dist-packages/" %}
{% set targets = { 'libvirtmod_qemu.so': 'libvirtmod_qemu.x86_64-linux-gnu.so',
'libvirtmod.so': 'libvirtmod.x86_64-linux-gnu.so'
} if grains['os'] == 'Debian' else {} %}
{% for file in ("libvirtmod_qemu.so", "libvirtmod.so", "libvirt_qemu.py", "libvirt.py", "libvirt_qemu.pyc", "libvirt.pyc") %}
/home/{{ pillar['user'] }}/.virtualenvs/vmdriver/lib/python2.7/site-packages/{{ file }}:
file.symlink:
- target: {{ libvirt_dir + targets[file]|default(file) }}
- require:
- virtualenv: virtualenv_vmdriver
{% endfor %}
......@@ -6,7 +6,7 @@
- group: {{ pillar['user'] }}
- mode: 700
{% if grains['os_family'] == 'RedHat' or grains['os'] == 'Debian' %}
/etc/systemd/system/vncproxy.service:
file.managed:
- user: root