
Added more functions for the vagrant sub project

Change-Id: I6032c3397dbee5d56dc43293eae42345c1ca4059
Tong Li 3 years ago
parent commit f10cb11b7d

README.md (+0, -2)

@@ -281,8 +281,6 @@ to be done.
 3. Edit /etc/kiloeyes/kiloeyes.conf file to configure the middleware::
 
         [keystone_authtoken]
-        auth_uri = http://<<keystone_ip>>:5000
-        auth_url = http://<<keystone_ip>>:5000
         identity_uri = http://<<keystone_ip>>:5000
 
         auth_type = token
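The same edit can also be applied from a shell rather than a text editor; a minimal sketch using crudini (crudini is an assumption here, it is not part of this project) would be::

        crudini --del /etc/kiloeyes/kiloeyes.conf keystone_authtoken auth_uri
        crudini --del /etc/kiloeyes/kiloeyes.conf keystone_authtoken auth_url
        crudini --set /etc/kiloeyes/kiloeyes.conf keystone_authtoken identity_uri http://<<keystone_ip>>:5000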

etc/kiloeyes.conf (+1, -1)

@@ -4,7 +4,7 @@
 log_file=api.log
 log_dir=/var/log/kiloeyes/
 log_level=DEBUG
-default_log_levels = kiloeyes=DEBUG
+default_log_levels = kiloeyes=DEBUG,keystonemiddleware=DEBUG
 
 dispatcher = metrics
 dispatcher = versions
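With keystonemiddleware added to default_log_levels, its debug messages end up in the API log configured above; a quick check (the path is assumed from the log_dir and log_file settings in this file) is::

        grep keystonemiddleware /var/log/kiloeyes/api.log | tail -n 20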

vagrant/README.md (+73, -13)

@@ -25,18 +25,78 @@ required software to run kiloeyes.
 Usage:
 ======
 You can install everything onto one machine or you can choose install different
-components onto different servers. There can be a lot of ways to split up
-servers for different services. Here is an example:
+components onto different servers. kiloeyes uses python-keystonemiddleware for
+security, but its dependencies conflict with the agent's dependencies, so
+kiloeyes currently cannot co-exist with the agent on a single machine. It is
+best to install kiloeyes and the agent onto separate machines to avoid
+installation headaches (a quick way to spot such conflicts is sketched right
+after this paragraph). This vagrant project uses configuration files in the
+directory vagrant/onvm/conf: nodes.conf.yml configures which nodes the various
+components are installed on, and ids.conf.yml stores credentials.
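A quick way to spot that kind of dependency conflict on a machine where both package sets ended up installed (a sketch; it only assumes pip is on the path) is::

        # Reports installed packages whose declared requirements are not
        # satisfied; a non-empty report is the conflict described above.
        pip check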
 
-        controller:
-            java
-            elasticsearch
-            kibana
-            kiloeyes
-        devstack:
-            OpenStack environment
-        agent01:
-            agent
+Here is an example::
+
+    controller:
+        host_name: controller.leap.dev
+        eth0: 192.168.1.90
+
+    agent01:
+        host_name: agent01.leap.dev
+        eth0: 192.168.1.88
+
+    logical2physical:
+        kiloeyes: controller
+        elastic: controller
+        kafka: controller
+
+    ctlnodes:
+        - elastic
+        - kafka
+        - kiloeyes
+
+    agentes:
+        - agent01
+
+
+The above configuration defines a total of 4 logical nodes: elastic, kafka,
+kiloeyes and agent01. The ctlnodes section gives the installation sequence
+(elastic, then kafka, then kiloeyes); ctlnodes are always installed before the
+agent nodes. The logical2physical section describes how a logical node maps to
+a physical machine: in this example, the 3 logical nodes elastic, kafka and
+kiloeyes all map to a physical node called controller, which is defined by its
+IP address and a host name, and agent01 is likewise defined by its IP and
+name. By changing that mapping, you can install elastic, kafka and kiloeyes
+onto different machines.
+
+Since the agent was developed specifically to work with OpenStack security,
+setting up the agent without OpenStack running somewhere is pointless. The
+best way to set the whole thing up is to follow these steps::
+
+1. Prepare 3 machines; either physical or virtual machines should work fine.
+2. Install DevStack onto the first machine and configure the keystone URL,
+   user ID and password in the nodes.conf.yml file. If you already have an
+   OpenStack system running, you can use that system as well; simply configure
+   the nodes.conf.yml file with the right keystone auth URL and credentials.
+3. Find out the second and third machines' IPs and fill them in the
+   nodes.conf.yml file; use the second machine for the controller and the
+   third for the agent.
+4. Make sure that the root user has the same password on the second and third
+   machines. Place the user name and password in the file ids.conf.yml. Also
+   make sure that the servers have ssh turned on so that vagrant can run
+   successfully.
+5. Kiloeyes depends on Java, Elasticsearch and Kafka. This vagrant project
+   will install these components onto the machines you specified in the conf
+   file, but you will have to download these binaries into a directory that
+   sits in the same directory as the kiloeyes root. The structure is indicated
+   above in the introduction section.
+6. Change to the vagrant directory and run the following two commands::
+
+        vagrant up
+        vagrant provision
+
+7. If all goes well, you should have everything running successfully. After a
+   while, the agent should be sending messages to kiloeyes and the data should
+   be available in Elasticsearch and visible through Kibana::
 
-To indicate how the servers will be used, please edit configuration file in
-vagrant/onvm/conf/nodes.conf.yml and ids.conf.yml file.
+        http://192.168.1.90:5601
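To double check that data is actually flowing before opening Kibana, a minimal sketch (assuming Elasticsearch listens on its default port 9200 on the controller from the example above) is::

        # List the indices kiloeyes has created and peek at a few documents.
        curl http://192.168.1.90:9200/_cat/indices?v
        curl 'http://192.168.1.90:9200/_search?size=3&pretty'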

vagrant/onvm/conf/nodes.conf.yml (+18, -21)

@@ -1,17 +1,8 @@
 ---
-repo:
-  host_name: repo.leap.dev
-  eth0: 192.168.1.88
-  eth1: 192.168.1.88
-
 controller:
   host_name: controller.leap.dev
   eth0: 192.168.1.90
 
-devstack:
-  host_name: devstack.leap.dev
-  eth0: 192.168.1.93
-
 agent01:
   host_name: agent01.leap.dev
   eth0: 192.168.1.88
@@ -20,27 +11,33 @@ logical2physical:
   kiloeyes: controller
   elastic: controller
   kafka: controller
-  devstack: controller
 
 
 # Define how many logical nodes and the sequence of the installation
 ctlnodes:
-  - devstack
   - elastic
   - kafka
   - kiloeyes
 
+# Define which agent nodes should be installed.
 agentes:
-#  - agent01
-#  - agent02
-
-uselocalrepo: yes
-
-aptopt: --force-yes
-
-
-# The nodes should be a list of logical name
-# The folder should be a local directory start from the project root
+  - agent01
+
+# This section defines the OpenStack credentials which will be used to
+# create services and users to configure kiloeyes and the agent.
+# security_on determines if the keystone middleware should be plugged into
+# the kiloeyes pipeline; when it is false, no security is turned on.
+security_on: true
+auth_uri: http://192.168.15.5:5000
+admin_user: admin
+admin_pw: ps
+agent_user: kiloeyesagent
+agent_pw: ps
+
+# The nodes should be a list of logical names and should appear in ctlnodes
+# The source should be a local directory relative to the vagrant directory
+# The target should be a directory on the target system.
 synchfolders:
   elastic:
     source: ./../../leapbin
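For reference, the install scripts below read this file with parse_yaml and a leap_ prefix (eval $(parse_yaml '/onvm/conf/nodes.conf.yml' 'leap_')), which flattens the nested keys into shell variables. Roughly, the example above becomes variables like the following (a sketch of the idea, not the literal parse_yaml output)::

        leap_controller_host_name="controller.leap.dev"
        leap_controller_eth0="192.168.1.90"
        leap_logical2physical_kiloeyes="controller"
        leap_security_on="true"
        leap_auth_uri="http://192.168.15.5:5000"
        leap_admin_user="admin"
        # ...which the scripts then reference, for example:
        echo "${leap_auth_uri}/v3"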

vagrant/onvm/scripts/install-agent.sh (+36, -1)

@@ -5,8 +5,43 @@
 source /onvm/scripts/ini-config
 eval $(parse_yaml '/onvm/conf/nodes.conf.yml' 'leap_')
 
-#apt-get update
+# Install git, python-dev and pip in case they have not been installed.
+apt-get update
+apt-get -qqy install git python-dev python-pip
 
+git clone https://github.com/openstack/monasca-agent.git /opt/monasca-agent
+
+cd /opt/monasca-agent
+
+# Make sure a few required packages are installed first
+pip install "requests>=2.9.1"
+pip install "psutil>=3.4.2"
+pip install -r requirements.txt
+
+python setup.py install
+
+echo 'Setting up agent by running monasca-setup...'
+monasca-setup --username $leap_agent_user \
+  --password $leap_agent_pw \
+  --project_name kiloeyes \
+  --system_only --keystone_url "${leap_auth_uri}/v3"
+
+echo 'Configuring supervisor.conf file...'
+iniset /etc/monasca/agent/supervisor.conf inet_http_server port 'localhost:9001'
+
+rm -r -f /etc/monasca/agent/conf.d/vcenter.yaml
+
+# The following section is to prepare for manual installation
+#mkdir -p /etc/monasca/agent/conf.d
+#
+#cp agent.yaml.template /etc/monasca/agent/agent.yaml
+#
+# Get the plugin configuration files
+#for key in cpu disk load memory network; do
+#  cp conf.d/$key.yaml /etc/monasca/agent/conf.d
+#done
+
+service monasca-agent restart
 
 echo 'Agent install is now complete!'
 
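Once the script finishes, a quick way to confirm the agent is up and pointed at the right keystone (a sketch; the file locations are the monasca-agent defaults that the commands above write to) is::

        # The agent runs several processes under supervisor; check they are up.
        service monasca-agent status
        # monasca-setup writes the credentials and keystone_url into agent.yaml.
        grep -E 'keystone_url|username|project_name' /etc/monasca/agent/agent.yaml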

vagrant/onvm/scripts/install-kiloeyes.sh (+51, -1)

@@ -18,13 +18,63 @@ eval node_ip=\$leap_${leap_logical2physical_kafka}_eth0; node_ip=`echo $node_ip`
 kafka_ip=$node_ip
 eval node_ip=\$leap_${leap_logical2physical_elastic}_eth0; node_ip=`echo $node_ip`
 elastic_ip=$node_ip
-
+eval node_ip=\$leap_${leap_logical2physical_kiloeyes}_eth0; node_ip=`echo $node_ip`
+kiloeyes_ip=$node_ip
 
 k_log_dir='/var/log/kiloeyes'
 k_pid_dir='/var/run/kiloeyes'
 mkdir -p $k_log_dir $k_pid_dir
 
 # Config the kiloeyes
+
+# If security_on is true, then we need to configure the keystone middleware
+if [ "$leap_security_on" = 'true' ]; then
+
+  echo 'Install keystone middleware...'
+  apt-get -qqy install software-properties-common
+  add-apt-repository -y cloud-archive:liberty
+  apt-get update
+  apt-get -qqy install python-keystonemiddleware
+
+  iniset /etc/kiloeyes/kiloeyes.ini 'pipeline:main' 'pipeline' 'authtoken api'
+  iniset /etc/kiloeyes/kiloeyes.ini 'filter:authtoken' 'paste.filter_factory' 'keystonemiddleware.auth_token:filter_factory'
+  iniset /etc/kiloeyes/kiloeyes.ini 'filter:authtoken' 'delay_auth_decision' false
+
+  iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken identity_uri $leap_auth_uri
+  iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken auth_type token
+  iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_user $leap_admin_user
+  iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_password $leap_admin_pw
+  iniset /etc/kiloeyes/kiloeyes.conf keystone_authtoken admin_tenant_name admin
+fi
+
+# If auth_uri is configured, then we need to create these services and users
+if [ ! -z "$leap_auth_uri" ]; then
+
+  apt-get -qqy install software-properties-common
+  add-apt-repository -y cloud-archive:liberty
+  apt-get update
+  apt-get -qqy install python-openstackclient
+  # Setup environment variables
+  export OS_USERNAME=$leap_admin_user
+  export OS_PASSWORD=$leap_admin_pw
+  export OS_TENANT_NAME=admin
+  export OS_AUTH_URL="${leap_auth_uri}/v3"
+  export OS_IDENTITY_API_VERSION=3
+
+  # If the service and user have not been set up, go ahead and set them up
+  openstack service list | grep monitoring
+  if [ $? -gt 0 ]; then
+    openstack service create --name kiloeyes --description "Monitoring" monitoring
+    openstack endpoint create --region RegionOne monitoring public http://$kiloeyes_ip:9090/v2.0
+    openstack endpoint create --region RegionOne monitoring admin http://$kiloeyes_ip:9090/v2.0
+    openstack endpoint create --region RegionOne monitoring internal http://$kiloeyes_ip:9090/v2.0
+
+    openstack project create --domain default --description "Kiloeyes Project" kiloeyes
+    openstack user create --domain default --password $leap_agent_pw $leap_agent_user
+    openstack role add --project kiloeyes --user $leap_agent_user member
+  fi
+fi
+
 echo 'Config /etc/kiloeyes/kiloeyes.conf file...'
 iniset /etc/kiloeyes/kiloeyes.conf DEFAULT log_dir $k_log_dir
 iniset /etc/kiloeyes/kiloeyes.conf kafka_opts uri $kafka_ip:9092
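With security on, a minimal end-to-end check after this script has run (a sketch; it assumes the admin credentials from nodes.conf.yml are exported as in the script above, uses the controller IP from the earlier example, and assumes kiloeyes exposes a monasca-style /v2.0/metrics resource) is::

        # Confirm the monitoring service and its endpoints were registered.
        openstack service list | grep monitoring
        openstack endpoint list | grep monitoring
        # Request a token and call the kiloeyes API with it.
        TOKEN=$(openstack token issue -f value -c id)
        curl -H "X-Auth-Token: $TOKEN" http://192.168.1.90:9090/v2.0/metrics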

vagrant/rebootvm.sh (+2, -2)

@@ -1,7 +1,7 @@
-#VBoxManage snapshot h2-compute01 restore "Snapshot 3"
+VBoxManage snapshot h2-compute01 restore "Snapshot 3"
 #VBoxManage snapshot h2-nova restore "Snapshot 3"
 VBoxManage snapshot h2-controller restore "Snapshot 3"
 
-#vboxmanage startvm h2-compute01 --type headless
+vboxmanage startvm h2-compute01 --type headless
 #vboxmanage startvm h2-nova --type headless
 vboxmanage startvm h2-controller --type headless

vagrant/stvm.sh (+1, -1)

@@ -1,3 +1,3 @@
-#vboxmanage controlvm h2-compute01 acpipowerbutton
+vboxmanage controlvm h2-compute01 acpipowerbutton
 #vboxmanage controlvm h2-nova acpipowerbutton
 vboxmanage controlvm h2-controller acpipowerbutton
