How to extend an Ubuntu Linux Logical Volume Manager disk

So, every now and then we run into issues where the virtual hard disk of a VM is becoming full.

These days every disk should ideally be used in a 1:1 mapping between disk and mount point, except for the boot disk, which will hold the boot partition and possibly a swap partition.

Having a single mount point per full disk eases system administration.

When one has expanded the virtual disk in the virtualization platform, one needs to expand the disk inside the virtual machine as well. This basically consists of four steps.

  1. Scan for updated virtual harddisk
  2. Grow partition on virtual harddisk
  3. Resize Physical Volume in LVM
  4. Resize/extend Logical Volume in LVM

These steps are not without risk! If any of them is performed incorrectly, you risk data loss. Making a backup or a snapshot of the virtual machine beforehand is highly recommended and should never be skipped when making changes that carry a risk of data loss.

Scanning for updated virtual harddisks

echo 1 | sudo tee /sys/class/block/sda/device/rescan

Take care to update the sda part in the command above to reflect the actual hard disk that has been resized in the virtualization platform. One can use the lsblk command to find the drive identifier.

for disk in {a..z}; do
	filetoscan="/sys/class/block/sd${disk}/device/rescan"
	if [ -f "$filetoscan" ]; then
		echo 1 | sudo tee "$filetoscan"
	fi
done

When you are not sure which disk was expanded, or when expanding multiple disks, the small script above loops through all disks in the sd* chain, covering any SATA/SCSI disks.

Grow partition on virtual disk

After scanning, one will need the drive identifier to grow the partition. Use the command below to grow the specific partition (here the third partition on /dev/sda):

sudo growpart /dev/sda 3

Below one can see a simulated output of this process

localuser@server:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   60G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   29G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   29G  0 lvm  /
sdb                         8:16   0   30G  0 disk
└─sdb1                      8:17   0   30G  0 part /home
sdc                         8:32   0  450G  0 disk
└─sdc1                      8:33   0  450G  0 part /home/supersizeduser
sdd                         8:48   0  100G  0 disk
sr0                        11:0    1 1024M  0 rom
localuser@server:~$ 
localuser@server:~$ 
localuser@server:~$ sudo growpart /dev/sda 3
localuser@server:~$ 
localuser@server:~$ lsblk
NAME                      MAJ:MIN RM  SIZE RO TYPE MOUNTPOINTS
sda                         8:0    0   60G  0 disk
├─sda1                      8:1    0    1M  0 part
├─sda2                      8:2    0    1G  0 part /boot
└─sda3                      8:3    0   59G  0 part
  └─ubuntu--vg-ubuntu--lv 252:0    0   29G  0 lvm  /
sdb                         8:16   0   30G  0 disk
└─sdb1                      8:17   0   30G  0 part /home
sdc                         8:32   0  450G  0 disk
└─sdc1                      8:33   0  450G  0 part /home/supersizeduser
sdd                         8:48   0  100G  0 disk
sr0                        11:0    1 1024M  0 rom
localuser@server:~$ 

Resize Physical Volume in LVM

After having resized the virtual disk, it is important to resize the Physical Volume in LVM. One can use the pvresize command and refer to the same disk and partition as before.

pvresize /dev/sda3
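As an optional sanity check, the pvs and vgs commands should now show the grown Physical Volume and the newly freed space in the volume group:

```shell
# List Physical Volumes: PSize should reflect the grown partition
sudo pvs
# List Volume Groups: VFree should show the newly available space
sudo vgs
```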

Resize/extend Logical Volume in LVM

The last step is to resize the actual Logical Volume that is created on top of the Physical Volume.

For that we can use the following command, which resizes the standard first Logical Volume in an Ubuntu installation to use all free space:

lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv
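Note that extending the Logical Volume does not automatically grow the filesystem on top of it; df will still show the old size. A minimal sketch of that final step, assuming the default ext4 root filesystem of an Ubuntu installation:

```shell
# Either let lvextend resize the filesystem in the same step (-r/--resizefs) ...
sudo lvextend -r -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

# ... or grow it afterwards by hand (resize2fs for ext4; use xfs_growfs for XFS)
sudo resize2fs /dev/ubuntu-vg/ubuntu-lv
```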

Summary of commands

So, summarized, the commands would become (assuming you are acting as root and we are resizing the default first partition and root mount point):

lsblk
echo 1 | sudo tee /sys/class/block/sda/device/rescan
growpart /dev/sda 3
pvresize /dev/sda3
lvextend -l +100%FREE /dev/ubuntu-vg/ubuntu-lv

Ubuntu Virtual Machines on VMware and Multipathd syslog messages

Every now and then you might install an Ubuntu or Debian machine and get the multipath daemon pushed along the way.

Multipathd is used in environments where a disk may be reachable over more than one path. Think of iSCSI, Fibre Channel, SAN, etc. If one path drops, the disk might still be available over the others.

However, in an environment with virtual disks this isn’t useful, and your syslog might get spammed with entries like the ones below. At least mine did.

multipathd[651]: sda: add missing path
multipathd[651]: sda: failed to get udev uid: Invalid argument
multipathd[651]: sda: failed to get sysfs uid: Invalid argument
multipathd[651]: sda: failed to get sgio uid: No such file or directory
multipathd[651]: sda: add missing path
multipathd[651]: sda: failed to get udev uid: Invalid argument
multipathd[651]: sda: failed to get sysfs uid: Invalid argument
multipathd[651]: sda: failed to get sgio uid: No such file or directory

There is a simple resolution to this issue.
Modify your multipath daemon configuration. Open the file /etc/multipath.conf in your favorite text editor and ensure it looks like the following:

defaults {
    user_friendly_names yes
}

blacklist {
    device {
        vendor "VMware"
        product "Virtual disk"
    }
}

Now restart your multipath-tools service:

sudo systemctl restart multipath-tools

(On older installations without systemd: /etc/init.d/multipath-tools restart.)

Your syslog should now get fewer messages that you couldn’t resolve anyway.
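To verify that the blacklist took effect, multipathd can be queried directly (these commands assume the multipath-tools package is installed):

```shell
# Show the multipath topology; the blacklisted VMware virtual disks should no longer be listed
sudo multipath -ll

# Dump the effective configuration, including the blacklist section we just added
sudo multipathd show config | grep -A 4 blacklist
```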

Another option is to turn multipathd off completely with the commands:

systemctl disable multipathd
systemctl stop multipathd

Or you can modify your virtual machine definition in VMware (ESX or Workstation) by adding the following to the .vmx file:

disk.EnableUUID = "TRUE"

How to remove Microsoft Edge from a system level in Windows 10

The quick way would be:

  • Navigate to C:\Program Files (x86)\Microsoft\Edge\Application
  • Open the folder named after the current version number
  • Locate setup.exe
  • Navigate to that folder within Command Prompt
  • Execute the following command: setup.exe --uninstall --system-level --verbose-logging --force-uninstall

For the whole story read : https://www.express.co.uk/life-style/science-technology/1320416/Microsoft-Block-Windows-10-Users-Remove-Google-Chrome-Rival-Edge

Converting DIG output to JSON

DIG is a powerful tool, mostly used to troubleshoot DNS queries.

However, sometimes we want to achieve a task in another field of expertise and collect DNS data. For example, when one needs to limit access to content that is hosted on changing servers, but can’t use an FQDN in the firewall rulebase because the reverse DNS isn’t accurate.

With dig we can collect the IP addresses that are returned from a DNS request, for example for google.com.

martijn@monitoring:~$ dig google.com

; <<>> DiG 9.9.5-9+deb8u17-Debian <<>> google.com
;; global options: +cmd
;; Got answer:
;; ->>HEADER<<- opcode: QUERY, status: NOERROR, id: 16736
;; flags: qr rd ra; QUERY: 1, ANSWER: 1, AUTHORITY: 0, ADDITIONAL: 1

;; OPT PSEUDOSECTION:
; EDNS: version: 0, flags:; udp: 4096
;; QUESTION SECTION:
;google.com.                    IN      A

;; ANSWER SECTION:
google.com.             36      IN      A       172.217.17.78

;; Query time: 1 msec
;; SERVER: 195.8.195.8#53(195.8.195.8)
;; WHEN: Tue Mar 10 10:38:02 CET 2020
;; MSG SIZE  rcvd: 55

martijn@monitoring:~$

But manually scraping the default output and maintaining a list is time-consuming. We can make the output cleaner by adding some additional parameters, for example with the following command:

martijn@monitoring:~$ dig google.com +nocomments +noquestion +noauthority +noadditional +nostats

; <<>> DiG 9.9.5-9+deb8u17-Debian <<>> google.com +nocomments +noquestion +noauthority +noadditional +nostats
;; global options: +cmd
google.com.             288     IN      A       172.217.168.238
martijn@monitoring:~$

While this is already much cleaner, we would still have to process this output manually, or perform some screen scraping to continue with it. We can, however, pipe the output of dig through the powerful awk command and skip the first three header lines.

martijn@monitoring:~$ dig aaaa google.com +nocomments +noquestion +noauthority +noadditional +nostats  | awk '{if (NR>3){print}}'
google.com.             53      IN      AAAA    2a00:1450:400e:80d::200e
martijn@monitoring:~$

And to be honest, yes, we could skip the first three lines with any other tool that provides this capability, but awk is generally available. Now that we only have the actual results of the query, it is safe to continue with the data.

DNS data always consists of a fixed structure.

Query                  TTL      CLASS   TYPE    Content 
google.com.             53      IN      AAAA    2a00:1450:400e:80d::200e 

In my case I need to process this data in a structured way, and I can process either JSON or XML. For this example I will convert the structured data to JSON. Because the content is separated by tabs by default, we can pull the data through jq. However, we need to keep in mind that sometimes there are multiple consecutive tabs, so we squeeze them into one with tr -s.

martijn@monitoring:~$ dig aaaa google.com +nocomments +noquestion +noauthority +noadditional +nostats  | awk '{if (NR>3){print}}' | tr -s '\t' | jq -R 'split("\t") |{Name:.[0],TTL:.[1],Class:.[2],Type:.[3],IpAddress:.[4]}'
{
  "Name": "google.com.",
  "TTL": "76",
  "Class": "IN",
  "Type": "AAAA",
  "IpAddress": "2a00:1450:400e:80e::200e"
}
martijn@monitoring:~$

The output we have now seems to be valid JSON; however, a DNS query returning multiple addresses produces a stream of separate objects, which is not valid as a single JSON document. A good example is querying the microsoft.com domain.

martijn@monitoring:~$ dig a microsoft.com +nocomments +noquestion +noauthority +noadditional +nostats  | awk '{if (NR>3){print}}'| tr -s '\t' | jq -R 'split("\t") |{Name:.[0],TTL:.[1],Class:.[2],Type:.[3],IpAddress:.[4]}'
{
  "Name": "microsoft.com.",
  "TTL": "3600",
  "Class": "IN",
  "Type": "A",
  "IpAddress": "104.215.148.63"
}
{
  "Name": "microsoft.com.",
  "TTL": "3600",
  "Class": "IN",
  "Type": "A",
  "IpAddress": "13.77.161.179"
}
{
  "Name": "microsoft.com.",
  "TTL": "3600",
  "Class": "IN",
  "Type": "A",
  "IpAddress": "40.76.4.15"
}
{
  "Name": "microsoft.com.",
  "TTL": "3600",
  "Class": "IN",
  "Type": "A",
  "IpAddress": "40.112.72.205"
}
{
  "Name": "microsoft.com.",
  "TTL": "3600",
  "Class": "IN",
  "Type": "A",
  "IpAddress": "40.113.200.201"
}
martijn@monitoring:~$

As already stated, the output isn’t yet a single valid JSON document; we need to run it once more through jq, this time with the --slurp option, which collects the objects into one array.

martijn@monitoring:~$ dig a microsoft.com +nocomments +noquestion +noauthority +noadditional +nostats  | awk '{if (NR>3){print}}'| tr -s '\t' | jq -R 'split("\t") |{Name:.[0],TTL:.[1],Class:.[2],Type:.[3],IpAddress:.[4]}' | jq --slurp .
[
  {
    "Name": "microsoft.com.",
    "TTL": "3256",
    "Class": "IN",
    "Type": "A",
    "IpAddress": "104.215.148.63"
  },
  {
    "Name": "microsoft.com.",
    "TTL": "3256",
    "Class": "IN",
    "Type": "A",
    "IpAddress": "13.77.161.179"
  },
  {
    "Name": "microsoft.com.",
    "TTL": "3256",
    "Class": "IN",
    "Type": "A",
    "IpAddress": "40.76.4.15"
  },
  {
    "Name": "microsoft.com.",
    "TTL": "3256",
    "Class": "IN",
    "Type": "A",
    "IpAddress": "40.112.72.205"
  },
  {
    "Name": "microsoft.com.",
    "TTL": "3256",
    "Class": "IN",
    "Type": "A",
    "IpAddress": "40.113.200.201"
  }
]
martijn@monitoring:~$

So, basically, to get the result of dig as valid JSON output, you could use a single call in your bash script:

#!/bin/bash
recordtype="A"
fqdn="microsoft.com"
digjson=$( dig "$recordtype" "$fqdn" +nocomments +noquestion +noauthority +noadditional +nostats  | awk '{if (NR>3){print}}'| tr -s '\t' | jq -R 'split("\t") |{Name:.[0],TTL:.[1],Class:.[2],Type:.[3],IpAddress:.[4]}' | jq --slurp . )

Feel free to query your own domain name or a specific record and adjust the record type, preferably by setting the variables.
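To make this reusable, the pipeline can be wrapped in a small function. This is just a sketch; the function name dig_to_json is my own, and it assumes dig, awk, tr and jq are installed:

```shell
#!/bin/bash
# dig_to_json <recordtype> <fqdn> — print the answer section as a JSON array
dig_to_json() {
  local recordtype="$1" fqdn="$2"
  dig "$recordtype" "$fqdn" +nocomments +noquestion +noauthority +noadditional +nostats \
    | awk '{if (NR>3){print}}' \
    | tr -s '\t' \
    | jq -R 'split("\t") |{Name:.[0],TTL:.[1],Class:.[2],Type:.[3],IpAddress:.[4]}' \
    | jq --slurp .
}
```

Calling dig_to_json A microsoft.com then produces the same array as the manual pipeline above.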

Installing PECL Extensions for PHP on a DirectAdmin server

It has been a while since I installed PECL extensions for PHP, especially on a DirectAdmin server, so I did some quick searches and came up with the following commands:

export PHP_VER=72
export PECL_EXTENSION=yaml

cd /usr/local/src
/usr/local/php${PHP_VER}/bin/pecl channel-update pecl.php.net
/usr/local/php${PHP_VER}/bin/pecl download ${PECL_EXTENSION}
tar zxf ${PECL_EXTENSION}-*.tgz && cd ${PECL_EXTENSION}-*/
/usr/local/php${PHP_VER}/bin/phpize
./configure --with-php-config=/usr/local/php${PHP_VER}/bin/php-config

make && make install
echo "extension=${PECL_EXTENSION}.so" >> /usr/local/php${PHP_VER}/lib/php.conf.d/90-custom.ini
systemctl restart httpd && systemctl restart php-fpm${PHP_VER}

unset PHP_VER
unset PECL_EXTENSION

First of all, it is important to have all requirements for the PECL extension installed; if not, the configure or make command will fail.

Now let’s take a look at these commands line by line.

export PHP_VER=72

This will set an environment variable with the value “72”, which matches the versioning that DirectAdmin and Custombuild use.

export PECL_EXTENSION=yaml

Setting another environment variable, this time with the PECL Extension name.

These two variables are referenced in the following commands, just to make them reusable.

cd /usr/local/src

This one, I really hope, should not need any explanation. We are changing directories…

/usr/local/php${PHP_VER}/bin/pecl channel-update pecl.php.net

We execute the pecl command for the desired PHP instance and update the channel data, just to make sure we are aware of the latest available versions.

/usr/local/php${PHP_VER}/bin/pecl download ${PECL_EXTENSION}

Let’s download the extension that we mentioned earlier in the environment variable.
It will be downloaded into /usr/local/src.

tar zxf ${PECL_EXTENSION}-*.tgz && cd ${PECL_EXTENSION}-*/

Now that the PECL extension has been downloaded, we can unpack it. Sometimes you see references with the ‘v’ option enabled for the tar command; I don’t think we need any verbose output while unpacking.

After unpacking, we immediately change directory into the just-extracted PECL extension.

/usr/local/php${PHP_VER}/bin/phpize

Now let’s prepare the extension by running phpize against the files in the current directory.

./configure --with-php-config=/usr/local/php${PHP_VER}/bin/php-config

Let us run the configuration and preparation scripts. Just be careful and watch the output.

If anything fails, you haven’t met the requirements for the PECL Extension you are installing. You will have to solve them before continuing the commands.

make && make install

So configure ran successfully and you executed the make and make install commands. The PECL extension is compiled on your system and afterwards installed in the right directory for your DirectAdmin server and PHP version.

echo "extension=${PECL_EXTENSION}.so" >> /usr/local/php${PHP_VER}/lib/php.conf.d/90-custom.ini

This appends a line to the end of the 90-custom.ini file in the PHP configuration directory, telling PHP to load the just compiled and installed PECL extension.

systemctl restart httpd && systemctl restart php-fpm${PHP_VER}

This restarts both the httpd and the php-fpm services.

httpd is the default service name for the Apache service on a DirectAdmin server; the default PHP installation method is PHP-FPM, hence we restart that as well.

If you are using Nginx or any other web server, however, you will have to restart it by hand. The same applies if you are running PHP in a way other than PHP-FPM.

unset PHP_VER
unset PECL_EXTENSION

Finally we unset both environment variables. No need to maintain them as we are done.

You might want to have a look at the phpinfo() output (or php -m) to verify that the PECL extension is loaded.
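For example, before unsetting the variables, the check against the exact PHP instance could look like this (paths assume the DirectAdmin layout used above):

```shell
# Should print the extension name if it was loaded successfully
/usr/local/php${PHP_VER}/bin/php -m | grep -i "${PECL_EXTENSION}"
```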

Pursuing the Cisco Certified Network Professional Voice certification? Update your path!

Whenever you are pursuing a Cisco Certified Network Professional certification in the Voice track, you should update your certification path. As of August 15th you will no longer be able to obtain your CCNP Voice certification. Even if you are still working towards your CCNP Voice, I would recommend updating your certification path and obtaining the brand new Cisco Certified Network Professional Collaboration certification.

The new CCNP Collaboration exam consists for a large part of the existing CCNP Voice materials, but has some updated exams on video topics. It also consists of fewer exams to take, so financially it would even save you money.

So what changed? To begin with, you are no longer required to take and pass the CVOICE exam. The CIPT1, CAPPS and TVOICE exams will give some exemptions for the new CCNP Collaboration certification. So if you have already done one or more exams, have a look at the CCNP Collaboration Exam Migration Tool.

Have a look at the migration scheme.

CCNP Collaboration

If you are already CCNP Voice, take the new CIPTV2 exam and upgrade your certification to CCNP Collaboration.

A new year, a new certification?

There are only a few days left in 2013, which means most people are thinking about what they want to achieve in the new year.

We usually call them New Year’s resolutions, but career paths also get a regular review.

I wrote down a while ago what I would like to achieve, but I haven’t fully gotten around to it; the study book and the accompanying instruction DVD, for instance, have been sitting at home for four months now. With the start of 2014 I want to focus on obtaining my CCNA Voice certification. It renews my CCNA, which I obtained at the beginning of 2012, and it also comes in handy in my work.

A deadline has been set as well, if only to keep my CCNA from expiring: the end of 2014.

I know the CCNA was a tough trial because of the breadth of the material; apart from the CCIE exams it is said to be one of the hardest. CCNA Voice, on the other hand, focuses specifically on voice. However, this is an old and complex subject with many different implementation options. Whether this will be an easy certification remains to be seen.

Albert Heijn pick-up point, not as fast as expected…

Today, July 27th, we are celebrating with family and friends that I have added another year to the count.

With the arrival of our little one, it seemed sensible to collect the groceries via the pick-up point. So here we are at the pick-up point, but not all of our groceries have arrived. Let’s hope they show up soon.

After waiting about 30 minutes, the Albert delivery van arrived. It brought the missing chilled goods, so that, together with me, five parties could happily receive their groceries.

If the chilled groceries had been available, shopping via Appie would have been much quicker; a pity this went wrong. Better luck next time?

The joy of working from home

Everyone has heard of it: people who work from home, or work according to ‘the new way of working’.

It is not for everyone and requires a certain discipline, especially when you work from home.

Between two site visits today, I had the opportunity to catch up on my email in the garden.


Nice for a change!

I’m with stupid -> A spammer with broken links…

Sometimes you wonder why all those mountains of spam get sent.

Many parties invest in a good solution to keep all that nonsense out, yet every once in a while a spam email slips through the filter.

So too today: an email claiming I had purchased Adobe CS4.

But if you really want to lure people to your malware, make sure the link works… 😉