Monday, January 22, 2018

Restart a service if a new package is installed.

---
 - hosts: nodes
   tasks:
    - name: Check package version
      shell: dpkg -s apache2 | grep -i version | awk -F ":" '{ print $2 }' | awk -F "-" '{ print $1 }'
      become: yes
      register: pkgversion
    - debug: var=pkgversion
    - name: Restart the service
      service: name=apache2 state=restarted
      become: yes
      when: '"2.4.18" in pkgversion.stdout'
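Where a newer Ansible is available, the same check can be done without parsing shell output. A minimal sketch using the package_facts module (available from Ansible 2.5; the version string is illustrative):

```yaml
---
 - hosts: nodes
   tasks:
    - name: Gather installed package information as facts
      package_facts:
        manager: auto
      become: yes

    - name: Restart apache2 when the expected version is installed
      service: name=apache2 state=restarted
      become: yes
      when: "'apache2' in ansible_facts.packages and
             ansible_facts.packages['apache2'][0].version is search('2.4.18')"
...
```

This avoids the fragile grep/awk pipeline and works across package managers.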



---
  - hosts: all
    tasks:
    - name: Check RPM version
      shell: /usr/bin/yum list httpd | grep httpd | awk '{print $2}' | cut -d'-' -f1
      register: installedver
    - name: Create NOT_UPDATED File
      local_action: lineinfile path=/home/test/books/check_rpm/NOT_UPDATED line="||HOSTNAME||INSTALLED_VERSION||" insertbefore=BOF state=present create=yes
    - name: Create UPDATED File
      local_action: lineinfile path=/home/test/books/check_rpm/UPDATED line="||HOSTNAME||INSTALLED_VERSION||PID_BR||PID_AR||RESTART_STAT||" insertbefore=BOF state=present create=yes
    - name: If installed version is as required take PID before restart
      shell: ps -ef | grep feedprocessor | grep -v grep | awk '{print $2}'
      register: pidbr
      when: installedver.stdout == '1.7.0_2'
    - name: Restart service after taking PID
      copy:
       content: "hello world\n"
       dest: /home/test/testfile
      notify: update_status
      when: installedver.stdout == '1.7.0_2'
    - name: Take PID after restart
      shell: ps -ef | grep feedprocessor | grep -v grep | awk '{print $2}'
      register: pidar
    - name: Update the NOT_UPDATED file when the package is not the latest
      local_action: lineinfile path=/home/test/books/check_rpm/NOT_UPDATED line="{{ ansible_fqdn }}|{{ installedver.stdout }}" insertafter=EOF state=present create=yes
      when: installedver.stdout != '1.7.0_2'
    handlers:
    - name: update_status
      local_action: lineinfile path=/home/test/books/check_rpm/UPDATED line="|{{ ansible_fqdn }}|{{ installedver.stdout }}|{{ pidbr.stdout }}|{{ pidar.stdout }}|restarted|" insertafter=EOF state=present

...






||HOSTNAME||INSTALLED_VERSION||PID_BR||PID_AR||RESTART_STAT||
|ansic1.example.com|2.4.6|5448|5721|restarted|
|ansic2.example.com|2.4.6|4640|4913|restarted|
|ansic1.example.com|2.4.6|5721|5988|restarted|
|ansic2.example.com|2.4.6|4913|5185|restarted|
[root@ansim0 ~]# cat rpm_check.yaml
---
  - hosts: http
    tasks:
    - name: Check RPM version
      shell: rpm -qa | grep httpd-2 | cut -d'-' -f2
      register: installedver
    - name: Create NOT_UPDATED File
      local_action: lineinfile dest=/root/NOT_UPDATED line="||HOSTNAME||INSTALLED_VERSION||" insertbefore=BOF state=present create=yes
    - name: Create UPDATED File
      local_action: lineinfile dest=/root/UPDATED line="||HOSTNAME||INSTALLED_VERSION||PID_BR||PID_AR||RESTART_STAT||" insertbefore=BOF state=present create=yes
    - name: If installed version is as required take PID before restart
      shell: ps -ef | grep httpd | grep -v grep | head -n 1 |awk '{print $2}'
      register: pidbr
      when: installedver.stdout == '2.4.6'
    - name: Restart service after taking PID
      service:
       name: httpd
       state: restarted
      notify: update_status
      when: installedver.stdout == '2.4.6'
    - name: Take PID after restart
      shell: ps -ef | grep httpd | grep -v grep | head -n 1 |awk '{print $2}'
      register: pidar
    - name: Update the NOT_UPDATED file when the package is not the latest
      local_action: lineinfile dest=/root/NOT_UPDATED line="{{ ansible_fqdn }}|{{ installedver.stdout }}" insertafter=EOF state=present create=yes
      when: installedver.stdout != '2.4.6'
    handlers:
    - name: update_status
      local_action: lineinfile dest=/root/UPDATED line="|{{ ansible_fqdn }}|{{ installedver.stdout }}|{{ pidbr.stdout }}|{{ pidar.stdout }}|restarted|" insertafter=EOF state=present




...
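As a side note, the same restart-on-version logic can be driven by the yum module itself, letting Ansible's changed status decide whether a restart is needed. A minimal sketch (package name and version pin are illustrative):

```yaml
---
 - hosts: http
   tasks:
    - name: Ensure the required httpd version is installed
      yum:
        name: httpd-2.4.6
        state: present
      register: httpd_pkg

    - name: Restart httpd only when the package was changed
      service:
        name: httpd
        state: restarted
      when: httpd_pkg is changed
...
```

Here no version string parsing is needed at all; the restart happens only on the run where yum actually installs or updates the package.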

Monday, January 8, 2018

Ansible Dry Run

Ansible has an option to run in dry-run (check) mode, i.e., without executing anything it shows what it would change.


# ansible-playbook myplaybook.yaml --check

Wednesday, September 6, 2017

Serial : Limit the Number of Hosts Processed in Parallel

By default, Ansible will try to manage all of the machines referenced in a play in parallel. For a rolling update use case, you can define how many hosts Ansible should manage at a single time by using the "serial" keyword:

- name: test play
  hosts: webservers
  serial: 3
In the above example, if we had 100 hosts, 3 hosts in the group 'webservers' would complete the play fully before moving on to the next 3 hosts.

The "serial" keyword can also be specified as a percentage in Ansible 1.8 and later, which will be applied to the total number of hosts in a play in order to determine the number of hosts per pass:

- name: test play
  hosts: webservers
  serial: "30%"
If the number of hosts does not divide equally into the number of passes, the final pass will contain the remainder.

As of Ansible 2.2, the batch sizes can be specified as a list, as follows:

- name: test play
  hosts: webservers
  serial:
  - 1
  - 5
  - 10
In the above example, the first batch would contain a single host, the next would contain 5 hosts, and (if there are any hosts left), every following batch would contain 10 hosts until all available hosts are used.

It is also possible to list multiple batch sizes as percentages:

- name: test play
  hosts: webservers
  serial:
  - "10%"
  - "20%"
  - "100%"
You can also mix and match the values:

- name: test play
  hosts: webservers
  serial:
  - 1
  - 5
  - "20%"

Maximum Failure Percentage

By default, Ansible will continue executing actions as long as there are hosts in the group that have not yet failed. In some situations, such as with the rolling updates described above, it may be desirable to abort the play when a certain threshold of failures have been reached. To achieve this, as of version 1.3 you can set a maximum failure percentage on a play as follows:

- hosts: webservers
  max_fail_percentage: 30
  serial: 10

In the above example, if more than 3 of the 10 servers in the group were to fail, the rest of the play would be aborted.

Note : The percentage set must be exceeded, not equaled. For example, if serial were set to 4 and you wanted the task to abort when 2 of the systems failed, the percentage should be set at 49 rather than 50.
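Putting serial and max_fail_percentage together, a rolling-update play might look like the following sketch (the host group, batch size, and placeholder task are illustrative):

```yaml
- name: rolling update
  hosts: webservers
  serial: 4
  # With serial 4, a value of 49 aborts the play once 2 hosts in a batch fail
  max_fail_percentage: 49
  tasks:
    - name: placeholder task for the rolling update
      ping:
```

Because the percentage must be exceeded, 49 rather than 50 is used to abort on the second failure out of four.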

Difference between Serial and Forks

Playbooks run on a number of hosts in parallel, but the tasks proceed in lockstep: all hosts complete task #1 before going on to task #2. Each task is run in parallel on a number of hosts equal to the number of forks defined (default 5). So with forks = 5, each task will be done in parallel on 5 hosts at a time until all hosts are done.

serial controls how many hosts go in each batch for the full play. So if you set serial = 5, 5 hosts will run each task in lockstep until the end of the play, and only then does the next batch of 5 hosts start the play.


The output is serialized to keep the console readable. In 2.0, Ansible introduced strategies that control play execution: the default ('linear') behaves as described above, while a new one called 'free' allows each host to run to the end of the play without waiting for the other hosts to complete the same task.


Serial overrides the forks setting.

Monday, August 14, 2017

Looping in ansible

If you have defined a YAML list in a variables file, or the ‘vars’ section, you can also do:
with_items: "{{ somelist }}"
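For example, with a list defined in the vars section, the loop might look like this sketch (the variable name and values are illustrative):

```yaml
---
 - hosts: all
   vars:
    somelist:
     - alice
     - bob
   tasks:
    - name: Echo each item from the list variable
      debug: msg="{{ item }}"
      with_items: "{{ somelist }}"
```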

Looping over file

---
 - hosts: ss
   tasks:
   - name: check the contents of a file and echo it
     debug: msg="{{item}}"
     with_file:
      - /home/bhr_moham607/test1.txt
      - /home/bhr_moham607/test2.txt

The task loops over the files and echoes their contents, but only if both files are available. Note that these files must be present on the master (control) node.

Looping with Fileglob

with_fileglob matches all files in a single directory, non-recursively, that match a pattern. It calls Python’s glob library, and can be used like this:

---
- hosts: all
  tasks:
    # first ensure our target directory exists
    - name: Ensure target directory exists
      file:
        dest: "/etc/fooapp"
        state: directory

    # copy each file over that matches the given pattern
    - name: Copy each file over that matches the given pattern
      copy:
        src: "{{ item }}"
        dest: "/etc/fooapp/"
        owner: "root"
        mode: 0600
      with_fileglob:
        - "/playbooks/files/fooapp/*"

Tuesday, August 8, 2017

Difference between Import and Include

Import

All import* statements are pre-processed at the time playbooks are parsed. If you use any import* Task (import_playbook, import_tasks, etc.), it will be static. 

For static imports, the parent task options will be copied to all child tasks contained within the import.

An import is expanded at parse time and its result is substituted into the playbook; unlike include, the code is not re-processed during execution, so there is no extra cost in memory or processing time.

With import loops cannot be used at all.

Include

All include* statements are processed as they are encountered during the execution of the playbook: Ansible replaces the statement with the contents of the file and continues. If you use any include* task (include_tasks, include_role, etc.), it will be dynamic.

For dynamic includes, the task options will only apply to the dynamic task as it is evaluated, and will not be copied to child tasks.

Tags and tasks which only exist inside a dynamic include will not show up in --list-tags or --list-tasks output. You cannot use notify to trigger a handler name which comes from inside a dynamic include. You cannot use --start-at-task to begin execution at a task inside a dynamic include.

When using variables for the target file or role name, variables from inventory sources (host/group vars, etc.) cannot be used.

Example for Import and Include

[root@ansim0 ymls]# cat Inc.yml
---
 - hosts: test
   tasks:
    - include: test.yml
      with_items: [ 1,2,3,4]

...

TASK [include] *****************************************************************
included: /root/ymls/test.yml for ansic2
included: /root/ymls/test.yml for ansic2
included: /root/ymls/test.yml for ansic2
included: /root/ymls/test.yml for ansic2

TASK [testing for include and import] ******************************************
ok: [ansic2] => {
    "msg": 1

}
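For comparison, the static counterpart of the example above would use an import; note that loop keywords such as with_items cannot be attached to a static import. A sketch, reusing the same test.yml:

```yaml
---
 - hosts: test
   tasks:
    # Static import: expanded when the playbook is parsed, so no loop is possible here
    - import_tasks: test.yml
```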



Friday, August 4, 2017

Ansible Filters

Filters in Ansible come from Jinja2 and are used to perform filtering operations on variables and other data.

Below are some of the filters with example.

1) Mandatory

When we use this filter and the variable is not set, we get an error like the one mentioned below.

---
 - hosts: ss
   tasks:
   - name: Check Mandatory filter setting
     debug: msg={{ variable | mandatory }}
...

Result:
FAILED! => {"failed": true, "msg": "Mandatory variable not defined."}

2) Default filter

{{ variable1 | default(56) }}

When this filter is used and the variable is not set, the variable's value is replaced with the default value mentioned in the filter rather than an error being raised.
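A small sketch of the default filter in action (variable names and values are illustrative):

```yaml
---
 - hosts: all
   vars:
    variable1: 99
   tasks:
    - name: Prints 56 because undefined_var is not set
      debug: msg="{{ undefined_var | default(56) }}"
    - name: Prints 99 because variable1 is defined
      debug: msg="{{ variable1 | default(56) }}"
```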

3) Default with omit

If we leave the default filter value as NULL, a chain of filters like "{{ variable | default(None) | second_filter or omit }}" will fail. So it is better to use omit as the default when the value would otherwise be NULL.

In the result we can see that only the file INDIA has permission 444; the others, where no mode value is set in the variable declaration section, received the default permissions.

---
 - hosts: ss
   tasks:
   - name: Test Default filter with omit module
     file: dest={{item.path}} state=touch mode={{item.mode|default(omit)}}
     with_items:
     - path: /tmp/I
     - path: /tmp/LOVE
     - path: /tmp/INDIA
       mode: "0444"
...

-rw-------  1 moham607 users          0 Jul 18 06:21 I
-rw-------  1 moham607 users          0 Jul 18 06:21 LOVE

-r--r--r--  1 moham607 users          0 Jul 18 06:21 INDIA

4) Using IP filter 

Use the IP filters to check whether a variable's value is a valid IPv4 (or IPv6) address. To use these filters, the python-netaddr package must be installed.

{{ myIP | ipaddr }}
{{ myIP | ipv4 }}
{{ myIP | ipv6 }}

---
 - hosts: ss
   tasks:
     - name: Take the IP of server using facts and check with the help of IP filter and print
       debug: msg="IP address of server is {{facter_ipaddress_eth2}}"
       when: (facter_ipaddress_eth2 | ipv4)
       # or, equivalently, using Ansible's own network facts:
       # when: (ansible_eth3.ipv4.address | ipv4)
...


The above task will print the message with the IP address when the value of facter_ipaddress_eth2 is a valid IPv4 address.

Sunday, July 23, 2017

The failed_when Conditional

Failed When

When failed_when is not used and a task hits an error, Ansible marks that task as failed and skips all remaining tasks for that host. failed_when lets you define exactly what counts as a failure, which is mainly useful for idempotency.

E.g., if there is a task to create a DB and the DB already exists, the task will throw an error. In this case it is better to use failed_when; otherwise the task exits with the error, which prevents the remaining tasks from running.

Another example, with file deletion, should make this clearer.

The task below deletes a file. (FYI, this can also be done with the file module; for better understanding it is explained here with the command module.)

---
 - hosts: ss
   tasks:
    - name: to check failed when
      command: rm /tmp/file1
      register: testoutput
      failed_when: "'Operation not permitted' in testoutput.stderr"
    - name:
      command: echo hi
...

We are using failed_when for idempotency in the above task. On the first run it removes file1, but if the task is run again it throws the error "No such file or directory", which is ignorable.

If failed_when were not used, the above task would fail at this point and the play would not continue. But when the error is a permission problem, "Operation not permitted", we do need to treat it as a failure. So we tell Ansible that the task is considered failed only when we get an "Operation not permitted" error; other errors are ignorable and can be treated as success.
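failed_when can also combine several conditions, for example checking the return code together with the error text; list entries are ANDed. A sketch of this variant:

```yaml
---
 - hosts: ss
   tasks:
    - name: Fail only on a non-zero exit with a permission error
      command: rm /tmp/file1
      register: rmout
      failed_when:
       - rmout.rc != 0
       - "'Operation not permitted' in rmout.stderr"
```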

Monday, July 17, 2017

Examples for all variables section

1) Using a variable and retrieving it

Use a variable and retrieve it to create a file

---
 - hosts: ss
   vars:
    file: test1
   tasks:
    - name: Create file using variable name
      file: path=/home/moham/{{ file }} state=touch
...

This way of calling variables with curly braces is the Jinja2 templating syntax.

2) Using facts

We already know what a fact is and how to get facts using the setup module. Here is an example of how to use a fact and retrieve its value.

---
 - hosts: ss
   tasks:
   - name: Use remote server fact and create a file in local
     local_action: file path=/home/moham/ymls/{{ ansible_hostname }} state=touch
...

3) using local facts in playbook

---
 - hosts: ss
   tasks:
   - name: Use remote server local facts
     local_action: file path=/home/moham/ymls/{{ ansible_local.serverinfo.info.servertype }} state=touch
...

[root@XXXX facts.d]# pwd
/etc/ansible/facts.d

[root@XXXX facts.d]# cat serverinfo.fact
[info]
customer_environment : test
servertype : vm-esx5

The format for retrieving a local fact is: ansible_local.<local fact file name>.<group name>.<key name>

    "ansible_facts": {
        "ansible_local": {
            "serverinfo": {
                "info": {
                    "application_name": " LINUX",
                    "application_role": "EXPLORER",
                    "customer_environment": "test",
                    "datacenter": "",
                    "msp": "",
                    "servertype": "vm-esx5"


           "ipv4": {
                "address": "",
                "broadcast": "",
                "netmask": "",
                "network": ""


---
Retrieving complex variables

 - hosts: ss
   tasks:
   - name: Use remote server fact and create a file in local
     local_action: file path=/home/moham607/ymls/{{ ansible_hostname }} state=touch
   - name: Use remote server local facts
     local_action: file path=/home/moham607/ymls/{{ ansible_local.serverinfo.info.servertype }} state=file
   - name: print the variable
     debug: msg="{{ ansible_eth3.ipv4.address }}"


   - name: print the variable
     debug: msg="{{ hostvars['l0202']['ansible_distribution'] }}"


Example for set fact and redirecting fact contents to a file

---
 - hosts: test
   tasks:
   - name: print the variable
     debug: msg="{{ ansible_enp0s3.ipv4.address }}"
   - name: check set_fact
     set_fact:
      IPA : "{{ ansible_enp0s3.ipv4.address }}"
   - name: create file with variable content
     local_action: copy content={{IPA}} dest=/root/ymls/setfact

Sunday, July 16, 2017

Lookup

---
- hosts: ss
  vars:
      ents: "{{ lookup('file', '/home/bhr_moham607/ymls/test') }}"
  tasks:
    - debug: msg="the value of foo.txt is {{ ents }}"
...

The variable used to hold the file contents can be given any name.

The CSV File Lookup

The csvfile lookup reads the contents of a file in CSV (comma-separated value) format. The lookup looks for the row where the first column matches keyname, and returns the value in the second column, unless a different column is specified.

The example below shows the contents of a CSV file named elements.csv with information about the periodic table of elements:

Symbol,Atomic Number,Atomic Mass
H,1,1.008
He,2,4.0026
Li,3,6.94
Be,4,9.012
B,5,10.81
We can use the csvfile plugin to look up the atomic number or atomic mass of Lithium by its symbol:

- debug: msg="The atomic number of Lithium is {{ lookup('csvfile', 'Li file=elements.csv delimiter=,') }}"
- debug: msg="The atomic mass of Lithium is {{ lookup('csvfile', 'Li file=elements.csv delimiter=, col=2') }}"
The csvfile lookup supports several arguments. The format for passing arguments is:

lookup('csvfile', 'key arg1=val1 arg2=val2 ...')
The first value in the argument is the key, which must be an entry that appears exactly once in column 0 (the first column, 0-indexed) of the table. All other arguments are optional.

Field      Default        Description
file       ansible.csv    Name of the file to load
delimiter  TAB            Delimiter used by the CSV file. As a special case, tab can be specified as either TAB or t.
col        1              The column to output, indexed by 0
default    empty string   Return value if the key is not in the CSV file
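The default argument is useful when the key may be missing. A sketch (the symbol Xx intentionally does not exist in the file):

```yaml
- debug: msg="Atomic mass of Xx is {{ lookup('csvfile', 'Xx file=elements.csv delimiter=, col=2 default=unknown') }}"
```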



The DNS Lookup (dig)

To use this lookup we need the dnspython library; otherwise, we will get the error below.

An unhandled exception occurred while running the lookup plugin 'dig'. Error was a <class 'ansible.errors.AnsibleError'>, original message: Can't LOOKUP(dig): module dns.resolver is not installed


The dig lookup runs queries against DNS servers to retrieve DNS records for a specific name (FQDN - fully qualified domain name). It is possible to lookup any DNS record in this manner.

There are a couple of different syntaxes that can be used to specify what record should be retrieved, and for which name. It is also possible to explicitly specify the DNS server(s) to use for lookups.

In its simplest form, the dig lookup plugin can be used to retrieve an IPv4 address (DNS A record) associated with FQDN:

If you need to obtain the AAAA record (IPv6 address), you must specify the record type explicitly. Syntax for specifying the record type is described below.
The trailing dot in most of the examples listed is purely optional, but is specified for completeness/correctness sake.

- debug: msg="The IPv4 address for example.com. is {{ lookup('dig', 'example.com.')}}"
In addition to the default A record, it is also possible to specify a different record type to query. This can be done either by passing an additional parameter of the form qtype=TYPE to the dig lookup, or by appending /TYPE to the FQDN being queried. For example:

- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com.', 'qtype=TXT') }}"

- debug: msg="The TXT record for gmail.com. is {{ lookup('dig', 'gmail.com./TXT') }}"

If multiple values are associated with the requested record, the results will be returned as a comma-separated list. In such cases you may want to pass option wantlist=True to the plugin, which will result in the record values being returned as a list over which you can iterate later on:

- debug: msg="One of the MX records for gmail.com. is {{ item }}"
  with_items: "{{ lookup('dig', 'gmail.com./MX', wantlist=True) }}"

In case of reverse DNS lookups (PTR records), you can also use a convenience syntax of format IP_ADDRESS/PTR. The following three lines would produce the same output:
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8/PTR') }}"
- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa./PTR') }}"

- debug: msg="Reverse DNS for 8.8.8.8 is {{ lookup('dig', '8.8.8.8.in-addr.arpa.', 'qtype=PTR') }}"




What are Facts and Custom Facts

What are Facts

Facts are information derived from speaking with your remote systems.

Turning Off Facts

If you know you don’t need any fact data about your hosts, and know everything about your systems centrally, you can turn off fact gathering. This has advantages mainly in scaling Ansible in push mode with very large numbers of systems, or if you are using Ansible on experimental platforms. In any play, just do this:

- hosts: whatever
  gather_facts: no
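Even with gather_facts disabled at the play level, facts can still be collected later for the tasks that need them, by calling the setup module explicitly. A sketch:

```yaml
- hosts: whatever
  gather_facts: no
  tasks:
    - name: Gather facts on demand for later tasks
      setup:
```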

Local Facts (Facts.d)

As discussed in the playbooks chapter, Ansible facts are a way of getting data about remote systems for use in playbook variables.

Usually these are discovered automatically by the setup module in Ansible. Users can also write custom facts modules, as described in the API guide. However, what if you want to have a simple way to provide system or user provided data for use in Ansible variables, without writing a fact module?


For instance, what if you want users to be able to control some aspect about how their systems are managed? “Facts.d” is one such mechanism.

In Remote Server [Client]

[root@ansic1 facts.d]# cat /etc/ansible/facts.d/hn.fact

[local_facts]
hostname=client1
environment=production
application=test1

In Master Server 

In the master server, run the setup module with the filter=ansible_local option; we can then see the local facts being pulled. In this way we can use our own custom facts to manage servers based on their environments.

[root@ansim0 ~]# ansible test -m setup -a "filter=ansible_local"

client1 | SUCCESS => {
    "ansible_facts": {
        "ansible_local": {
            "hn": {
                "local_facts": {
                    "application": "test1",
                    "environment": "production",
                    "hostname": "client1"
                }
            },
            "one": {
                "general": {
                    "asdf": "1",
                    "bar": "2"
                }
            }
        }
    },
    "changed": false
}
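Once pulled, these custom facts can drive conditionals in a playbook. A sketch using the hn.fact file shown above:

```yaml
---
 - hosts: test
   tasks:
    - name: Run only on production servers, per the custom fact
      debug: msg="{{ ansible_local.hn.local_facts.application }} runs in production"
      when: ansible_local.hn.local_facts.environment == 'production'
```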

Errors Observed while creating custom facts

a) Exec Format error 

Solution : The custom fact file has execute permission but is not actually an executable file, so remove the execute permission from the custom fact file.

b) error loading fact - please check content

Solution : The custom fact file didn't have properly formatted contents.