$ dd if=/dev/zero of=/dev/sdc bs=1M count=1
$ mkudffs -b 512 --media-type=hd /dev/sdc
$ mount --bind /dev /dev
$ mount --bind /proc /proc
$ mount --bind /sys /sys
$ SHELL=/bin/bash chroot <mount_point>
OR
$ exec bash
$ chroot /mnt/ /bin/bash
set root=(hd0,0)
linux /boot/vmlinuz26 root=/dev/sda1 ro
initrd /boot/kernel26.img
boot
$ netstat -netlp
$ netstat -netlp | grep '9000'
$ lsof -i tcp:9000 -n
$ qemu-system-x86_64 -hda /dev/sdx
$ qemu-system-x86_64 -cdrom filename.iso
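A combined sketch, assuming a hypothetical disk image disk.img: boot from the ISO with the disk attached and 1 GB of RAM (-boot d selects the CD-ROM as the boot device):
$ qemu-system-x86_64 -m 1024 -hda disk.img -cdrom filename.iso -boot d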
$ wc -l [filename]
$ cat query_demorada_ids.txt | sed 's/,/\n/g' > query_demorada_formatted.txt
$ aspell --encoding UTF-8 -l pt_BR dump master > /tmp/brasil.txt
(the pt_BR dictionary must be installed for this to work)
$ cat /etc/issue OR $ cat /etc/os-release
$ dig [domain_name]
(if the name does not resolve, the answer will be blank)
After bringing up the "eth0" interface (or equivalent):

#!/bin/bash
# multi_ips.sh
ifconfig eth0:0 165.1.200.237 netmask 255.255.254.0
ifconfig eth0:1 165.1.200.238 netmask 255.255.254.0
ifconfig eth0:2 165.1.200.239 netmask 255.255.254.0
ifconfig eth0:3 165.1.200.240 netmask 255.255.254.0
ifconfig eth0:4 165.1.200.241 netmask 255.255.254.0
ifconfig eth0:5 165.1.200.242 netmask 255.255.254.0
ifconfig eth0:6 165.1.200.243 netmask 255.255.254.0
ifconfig eth0:7 165.1.200.244 netmask 255.255.254.0
ifconfig eth0:8 165.1.200.245 netmask 255.255.254.0
ifconfig eth0:9 165.1.200.246 netmask 255.255.254.0
ifconfig eth0:10 165.1.200.247 netmask 255.255.254.0
ifconfig eth0:11 165.1.200.248 netmask 255.255.254.0
ifconfig eth0:12 165.1.200.249 netmask 255.255.254.0
ifconfig eth0:13 165.1.200.250 netmask 255.255.254.0
ifconfig eth0:14 165.1.200.251 netmask 255.255.254.0
ifconfig eth0:15 165.1.200.252 netmask 255.255.254.0
ifconfig eth0:16 165.1.200.253 netmask 255.255.254.0
ifconfig eth0:17 165.1.200.254 netmask 255.255.254.0
ifconfig eth0:18 165.1.200.132 netmask 255.255.254.0

IMPORTANT: put the IPs from eth0:0 onwards in an IP range different from the IP on the main eth0. To confirm, after setting the IPs, run:
$ ifconfig -a
This script could go in /etc/rc.local, for example. As an alternative, edit the IP configuration file:
$ vim /etc/network/interfaces:
auto lo
iface lo inet loopback

auto eth0
iface eth0 inet dhcp

auto eth0:0
iface eth0:0 inet static
    address 192.168.0.1
    netmask 255.255.255.0
    broadcast 192.168.0.255

auto eth0:1
iface eth0:1 inet static
    address 192.168.0.2
    netmask 255.255.255.0
    broadcast 192.168.0.255

auto eth0:2
iface eth0:2 inet static
    address 192.168.0.3
    netmask 255.255.255.0
    broadcast 192.168.0.255
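On systems where ifconfig is deprecated, the same aliases can be created with iproute2. A sketch using the first two addresses from the script above (255.255.254.0 corresponds to /23):
$ ip addr add 165.1.200.237/23 dev eth0 label eth0:0
$ ip addr add 165.1.200.238/23 dev eth0 label eth0:1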
RSYNC PARAMETERS:
r: recursive
c: uses checksums to detect whether a file changed
h: human-readable output
z: compresses data during the transfer
P: continues (resumes) from where it previously stopped (partial)
v: verbose
i: detailed information on what has been done with each file
--progress: shows transfer progress
--iconv: origin and destination character-set encodings. E.g.: latin1,utf8
--log-file: the log file, recording all operations made
--delete: deletes files to keep origin and destination synchronized
--bwlimit: maximum bandwidth limit to use, in kbps
IMPORTANT: --dry-run simulates the transfer but doesn't touch the filesystem (useful to predict what will be done, as a DEBUG mode).

$ rsync -rchzPvi --bwlimit=300 --progress --iconv=latin1,utf8 --log-file=log.txt --delete projetos_tiagoprn/ projetos_tiagoprn_ALL

Below, ignoring folders and files with .svn and .git in their names:
$ rsync -rchzPvi --bwlimit=300 --progress --iconv=latin1,utf8 --log-file=sync.txt --delete --exclude '*.svn*' --exclude '*.git*' /origin/* /destiny/folder

Below, rsync to a remote server through ssh, on a server that listens for ssh connections on a non-standard port:
$ rsync -rchzPvi --bwlimit=100 --progress --iconv=latin1,utf8 --log-file=/tmp/rsync.txt --delete --exclude '*~' --exclude '*.BAK' /origem/ -e "ssh -p 22000" user@remoteserver.net:/destination_folder

RSYNC AS A DAEMON:
http://kezhong.wordpress.com/2010/12/01/rsync-backup-in-daemon-mode/

RSYNC WITH AN EXCLUDE FILE:
EXCLUDE_FILE contents:
/dev/*
/proc/*
/sys/*
/media/*
/mnt/*
/run/*
/tmp/*
$ rsync -avc --exclude-from=EXCLUDE_FILE / /mnt

RSYNC OVER SSH ON A NON-DEFAULT PORT:
$ rsync -rchzPvi --progress yum.repos.d/ -e "ssh -p9999" root@192.168.0.154:/tmp
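For the daemon mode linked above, a minimal /etc/rsyncd.conf sketch (the module name, path and user are assumptions):
[backup]
    path = /srv/backup
    read only = false
    auth users = rsyncuser
    secrets file = /etc/rsyncd.secrets
Then start the daemon and sync against the module (the "::" syntax addresses an rsync daemon module instead of a remote shell path):
$ rsync --daemon
$ rsync -avz somefile rsyncuser@server::backup/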
$ grep -rn 'string_to_find' . | grep -v 'cache' | grep -v '.svn'
In this example, it searches for 'string_to_find' recursively in the current folder, showing the line numbers where the string was found, but ignoring results containing the strings "cache" and ".svn".
$ wget -k -r -p http://www.site.com
The -r option recurses through the site's links starting at http://www.site.com/index.html. The -k option rewrites the downloaded files so that links from page to page are all relative, allowing you to navigate correctly through the downloaded pages. The -p option downloads all extra content on the page, such as images. This way, you can get a mirror of a site on your desktop. wget also handles proxies, cookies and HTTP authentication, along with many other conditions.
Wget is a super-useful utility to download pages and automate all types of web related tasks. It works for HTTP as well as FTP URLs.
To get wget to use a proxy, you must set up an environment variable before using wget. Type this at the command prompt / console:
$ export http_proxy="http://proxy.example.com:8080"
Replace proxy.example.com with your actual proxy server.
Replace 8080 with your actual proxy server port.
You can similarly use ftp_proxy to proxy ftp requests. E.g.:
$ export ftp_proxy="http://proxy.example.com:8080"
Then you should specify the following option in wget command line to turn the proxy behavior on:
proxy=on
Alternatively you can use the following to turn it off:
proxy=off
You can use proxy-username="user name" and proxy-passwd="password" to set the proxy user name and password where required.
Replace user name with your proxy server user name and password with your proxy server password. Another alternative is to specify them in http_proxy / ftp_proxy environment variable as follows:
$ export http_proxy="http://username:password@proxy.example.com:8080"
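The variable can also be set for a single invocation only. A sketch (the URL is a placeholder; -e passes a .wgetrc-style command, and use_proxy=on forces proxy usage):
$ http_proxy="http://username:password@proxy.example.com:8080" wget -e use_proxy=on http://example.com/file.tar.gz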
$ gpg -r B47B5D91 -se encriptar.txt
$ gpg -d encriptado.gpg > encriptado.txt (cheatsheet: http://irtfweb.ifa.hawaii.edu/~lockhart/gpg/)
$ crontab -e
59 23 * * * /home/tiago/script.sh > /home/tiago/script.log 2>&1
Important: the "2>&1" also redirects stderr to the log file. Another example, running every day at 5 AM:
0 5 * * * /home/user/my_command.sh
$ useradd -m -d /home/newuser -g users -G wheel,admin,docker newuser
, where:
-m: create the home directory
-d: the user home directory
-g: the user default group
-G: other groups, separated with commas
"newuser": name (login) of the user
You can change the user's password like this:
$ passwd newuser
$ chsh
, specifying the shell's full path. E.g.:
$ chsh -s /bin/bash

To get 2 GB of space for files in RAM, edit /etc/fstab to add the following line:
tmpfs /var/ramspace tmpfs defaults,size=2048M 0 0
After that, /var/ramspace is the place to store your files in memory.
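To apply it without rebooting, the same tmpfs can be mounted by hand:
$ sudo mkdir -p /var/ramspace
$ sudo mount -t tmpfs -o size=2048M tmpfs /var/ramspace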
$ ncdu
$ ssh-copy-id -i ~/.ssh/id_rsa.pub -p [remote_ssh_port] remoteuser@192.168.0.200
$ ssh -i my_file.pem user@remotehost
$ id
Sample output:
uid=1000(tiago) gid=1000(tiago) groups=1000(tiago),0(root),10(wheel),100(users),108(vboxusers),142(docker)
$ sshfs -o IdentityFile=~/.ssh/id_rsa myuser@REMOTE_IP:/home/tparanhos/shared /remotes/REMOTE_NAME
$ sudo umount /remotes/REMOTE_NAME
To monitor HTTP traffic, including request and response headers and message body, from a particular source:
$ tcpdump -i eth0 -n -vvv -s 0 'src 172.16.1.208 and tcp port 80 and (((ip[2:2] - ((ip[0]&0xf)<<2)) - ((tcp[12]&0xf0)>>2)) != 0)' -w /tmp/tcpdump.txt
NOTE: exchange "src" for "dst" to capture requests with a given DESTINATION.
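Since -w writes raw packets (pcap format) rather than plain text, read the capture back with tcpdump itself (-A prints the payload as ASCII):
$ tcpdump -nn -A -r /tmp/tcpdump.txt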
This can be used to simulate 1000 simultaneous curls to a given URL, to test how a server performs under a high number of requests:
$ parallel -j 1000 ./test.sh ::: {1..1000}
(where "test.sh" is a script with one or more "curl" commands)
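A minimal sketch of what "test.sh" could look like (the URL is a placeholder assumption):
#!/bin/bash
# Hits the endpoint and prints only the HTTP status code.
curl -s -o /dev/null -w "%{http_code}\n" "http://localhost:8080/"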
$ yum install siege
$ siege -b -c50 -d1 -t60S -i -f urls.txt
WHERE:
-c: number of concurrent users.
-d: random interval between 0 and NUM seconds that each "user" sleeps between requests.
-t: time to run the benchmark. E.g.: -t60S (60 seconds), -t1H (1 hour), -t120M (120 minutes).
-f: a file containing the URLs to be reached.
-i: randomly select the URLs from the file.
-D: debug mode.
-g: makes siege behave like "curl", useful for testing siege with a single request. E.g.:
$ siege -H 'Accept: application/json' -H 'Content-Type: application/json' -g 'http://localhost:5001/normalize POST {"sku": {"sku": "260"}}'
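A sketch of what urls.txt could contain (the endpoints are assumptions based on the example above; siege takes one URL per line, with an optional POST body after the URL):
http://localhost:5001/health
http://localhost:5001/normalize POST {"sku": {"sku": "260"}}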
$ httperf --server hostname --port 80 --uri /test.html --rate 150 --num-conn 27000 --num-call 1 --timeout 5
This command causes httperf to use the web server
on the host with IP name hostname, running at port 80.
The web page being retrieved is "/test.html" and,
in this simple test, the same page is retrieved repeatedly.
The rate at which requests are issued is 150 per second.
The test involves initiating a total of 27,000 TCP
connections and on each connection one HTTP call is
performed (a call consists of sending a request and receiving
a reply). The timeout option selects the number
of seconds that the client is willing to wait to hear back
from the server. If this timeout expires, the tool considers
the corresponding call to have failed. Note that
with a total of 27,000 connections and a rate of 150 per
second, the total test duration will be approximately 180
seconds, independent of what load the server can actually
sustain.
Once a test finishes, several statistics are printed.
An example output of httperf is:
Total: connections 27000 requests 26701 replies 26701 test-duration 179.996 s
Connection rate: 150.0 conn/s (6.7 ms/conn, <=47 concurrent connections)
Connection time [ms]: min 1.1 avg 5.0 max 315.0 median 2.5 stddev 13.0
Connection time [ms]: connect 0.3
Request rate: 148.3 req/s (6.7 ms/req)
Request size [B]: 72.0
Reply rate [replies/s]: min 139.8 avg 148.3 max 150.3 stddev 2.7 (36 samples)
Reply time [ms]: response 4.6 transfer 0.0
Reply size [B]: header 222.0 content 1024.0 footer 0.0 (total 1246.0)
Reply status: 1xx=0 2xx=26701 3xx=0 4xx=0 5xx=0
CPU time [s]: user 55.31 system 124.41 (user 30.7% system 69.1% total 99.8%)
Net I/O: 190.9 KB/s (1.6*10^6 bps)
Errors: total 299 client-timo 299 socket-timo 0 connrefused 0 connreset 0
Errors: fd-unavail 0 addrunavail 0 ftab-full 0 other 0
(reference: http://www.hpl.hp.com/research/linux/httperf/wisp98/httperf.pdf)
To the same machine used as "gateway":
$ ssh -f tparanhos@54.165.155.207 -L8888:54.165.155.207:80 -N
(gateway: 54.165.155.207, target: 54.165.155.207)
, where:
-f = (optional) go into the background just before the command executes
tparanhos@54.165.155.207 = username@server (you must have an account on it for the ssh connection to be established, so you can then forward the port)
-L8888:54.165.155.207:80 = establishes the tunnel. The syntax is -L[local-port]:[host]:[remote-port] (NOTE: the "host" can be ANY host, as long as it is accessible through "username@server")
-N = (optional) do not execute a command on the remote system
This will forward the local port 8888 to port 80 on 54.165.155.207, with the nice benefit that everything is encrypted through ssh.
Note:
NAT: 54.165.155.207
RABBITMQ: 172.16.1.232:15672
, where to reach RABBITMQ I can only do so by connecting first to NAT with my ssh credentials (instead of NAT I could use another machine; it would just have to be on the same network as RABBITMQ). This is how I set up the tunnel:
$ ssh -f tparanhos@54.165.155.207 -L15672:172.16.1.232:15672 -N
, which gives us the formula:
$ ssh -f user@NAT -L[LOCALHOST_PORT]:[RABBITMQ_HOST]:[RABBITMQ_PORT] -N
After that, I can access RABBITMQ directly, since it will be mapped to localhost:15672. And all the traffic will be encrypted through ssh.
HOW TO MAKE A REVERSE SSH TUNNEL:
http://xmodulo.com/access-linux-server-behind-nat-reverse-ssh-tunnel.html
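A sketch of the reverse direction covered in the link above: expose the local sshd (port 22) as port 2222 on the remote machine, so the remote side can ssh back in (the port choice is arbitrary):
$ ssh -f -N -R 2222:localhost:22 tparanhos@54.165.155.207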
$ mail -a /opt/backup.sql -s "Backup File" user@example.com,user2@example.com < /dev/null
, where:
-a is used for attachments
-s is used for defining the email subject
$ cd /home/www
$ find . -type f -print0 | xargs -0 sed -i 's/subdomainA.example.com/subdomainB.example.com/g'
$ hostnamectl set-hostname mt-extracoes-business_utils
To check it really changed:
$ hostnamectl status
$ vim /etc/hostname (type the hostname in the file). $ hostname -F /etc/hostname
You can use awk to get a specific column of a command's output in the terminal. Combined with grep and cat, it can be a simple way to filter command outputs. E.g., to get the first column of "docker ps":
$ docker ps | awk '{print $1}'
To get more than one column:
$ ps aux | grep worker.py | awk '{print $2, $9, $10, $12, $13, $14, $15, $16, $17, $18}'
awk splits columns by spaces, so if one of the lines has an extra space, the columns can end up a mess. In that case, it can be useful to use "cut" as an alternative (see the sketch after the next example).
E.g., concatenate the first 2 columns, separating them with a ":":
$ docker images | grep scrapy
precifica/scrapy latest 79fba72e40c7 23 minutes ago 317.9 MB
precifica/scrapy 20160504.0929.59 4f7853f49d3d 29 minutes ago 317.9 MB
precifica/scrapy 20160504.0914.17 2ffa69250fab 44 minutes ago 320.4 MB
precifica/precifica-scrapy-freelas 20151026.1739.24 ac3624be9d6e 6 months ago 793 MB
$ docker images | grep scrapy | awk '{v=$1":"$2; print v}'
precifica/scrapy:latest
precifica/scrapy:20160504.0929.59
precifica/scrapy:20160504.0914.17
precifica/precifica-scrapy-freelas:20151026.1739.24
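As mentioned above, when extra spaces break awk's field numbering, "cut" can be combined with "tr" to squeeze the repeated spaces first. A sketch that extracts the PID (second field) from the earlier ps example:
$ ps aux | grep worker.py | tr -s ' ' | cut -d' ' -f2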
$ date +%Y%m%d-%H%M%S-%N
Toggle terminal full screen: [Ctrl][Shift]x
Host backup_server
    HostName 10.0.6.123
    Port 29020

From now on, you can ssh, scp and rsync to this server referring to it as, e.g., tiago@backup_server.
$ vim ~/.ssh/config

Host *
    TCPKeepAlive yes
    ServerAliveInterval 60
    ServerAliveCountMax 120

# Default GitHub ("workspace" folder)
Host github.com
    HostName github.com
    User git
    IdentityFile ~/.ssh/id_rsa

Host github-personal
    HostName github.com
    User git
    IdentityFile ~/.ssh/gh_personal_projects

Host bitbucket
    HostName bitbucket.com
    User git
    IdentityFile ~/.ssh/my_bb_key

This will send a packet to the server (TCPKeepAlive) every 60 seconds (ServerAliveInterval), to keep the connection active and avoid broken pipes. It allows a total of 2 hours (ServerAliveCountMax: 120 times * 60 seconds) of inactivity in the session. It will also automatically select the ssh key required for each of the hosts above (including 2 different keys depending on the github repository).
NOTE: To generate a new ssh key:
$ ssh-keygen -t rsa -b 4096 -C "your_email@example.com"
(it will ask for the location of the new key, which you may change if you want another one)
$ ssh-keygen -lf id_rsa
2048 SHA256:DkHk1i5a6Q5yBlK5REiCoeqNUrqgCc2/eGB4QSS5MBZk tiago@imaginary (RSA)
$ ssh-keygen -E md5 -lf id_rsa
2048 MD5:c0:a3:a0:7a:60:0a:60:64:b1:4e:f7:40:da:7c:86:ef tiago@jeitto (RSA)
$ echo -n user:pass | openssl enc -a
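To decode it back, use plain echo (without -n), since openssl's line-based base64 decoder expects a trailing newline:
$ echo dXNlcjpwYXNz | openssl enc -a -d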
Just use the "-v" parameter. E.g.:
$ curl -v 'http://localhost:8888/version'
Output:
> GET /version HTTP/1.1
> Host: localhost:8888
> User-Agent: curl/7.44.0
> Accept: */*
>
< HTTP/1.1 200 OK
< Content-Type: application/json; charset=UTF-8
< Etag: "383be678ad2bafadce4482ecf7dc3c2f87203982"
< Server: TornadoServer/4.3
< Date: Sat, 28 Nov 2015 02:37:36 GMT
< Content-Length: 48
<
nohup: makes any command "immune" to termination signals. It is useful, for instance, when you don't have screen or tmux installed on a server but must make sure the command keeps running even if you log out of your session. You can also combine it with "&" at the end to make any command run in the background.
$ yum install sendmail
$ systemctl enable sendmail
$ systemctl start sendmail
$ echo "This is the email contents." | mail -s "That is the subject" projects@tiagoprnl.me
You can use this, for instance, to automatically send e-mails from cronjob executions. For that, add the line below at the top of your crontab:
MAILTO=projects@tiagoprnl.me
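A sketch of how that looks inside the crontab (the job reuses the earlier example; any output the job produces is mailed to the MAILTO address):
MAILTO=projects@tiagoprnl.me
0 5 * * * /home/user/my_command.sh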
$ vim ~/.bashrc (on Arch Linux)
$ vim ~/.bash_profile (on other distros?)
Then:
$ source ~/.bashrc (or ~/.bash_profile)
$ chmod --reference=haproxy.cfg.ORIG haproxy.cfg
$ chown --reference=haproxy.cfg.ORIG haproxy.cfg
The commands above copy the permissions (chmod) and the owner (chown) from haproxy.cfg.ORIG onto haproxy.cfg.
Automatically restarts SSH sessions and tunnels. It can be called at boot time, e.g. via a systemd unit that automatically establishes the tunnel (see the sketch after the examples below). This can be useful, e.g., to access your home connection from DigitalOcean or another cloud provider (using a reverse tunnel). IMPORTANT: passwordless key-based authentication must be enabled for it to work.
$ autossh -M 10984 -o "ExitOnForwardFailure=yes" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -L 22131:localhost:22131 tiago@159.203.160.142
$ autossh -M 10984 -o "ExitOnForwardFailure=yes" -o "PubkeyAuthentication=yes" -o "PasswordAuthentication=no" -o "ServerAliveInterval 60" -o "ServerAliveCountMax 3" -R 22131:localhost:22131 tiago@159.203.160.142
(more at: http://www.debianadmin.com/howto-use-ssh-local-and-remote-port-forwarding.html)
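A sketch of a systemd unit for the boot-time case mentioned above (the unit name, user and paths are assumptions; AUTOSSH_GATETIME=0 makes autossh keep retrying even if the very first connection attempt at boot fails, and -N without -f keeps the process in the foreground for systemd):

# /etc/systemd/system/autossh-tunnel.service
[Unit]
Description=Persistent autossh reverse tunnel
After=network-online.target

[Service]
User=tiago
Environment=AUTOSSH_GATETIME=0
ExecStart=/usr/bin/autossh -M 10984 -N -o "ExitOnForwardFailure=yes" -R 22131:localhost:22131 tiago@159.203.160.142
Restart=always
RestartSec=10

[Install]
WantedBy=multi-user.target

Then:
$ sudo systemctl enable --now autossh-tunnel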
help:
	@echo -e "\nHELP:\n"
	@echo -e " make clean \n\tDeletes the *.pyc files\n"
	@echo -e " make crawl-normal SPIDERNAME=amazon \n\tRun scrapy (normal speed).\n"
	@echo -e " make crawl-slow SPIDERNAME=amazon \n\tRun scrapy (slow speed to avoid blocking).\n"
	@echo -e " make config-devel \n\tUpdate the config files to run the app on this environment.\n"
	@echo -e " make config-homol \n\tUpdate the config files to run the app on this environment.\n"
	@echo -e " make config-prod \n\tUpdate the config files to run the app on this environment.\n"
	@echo -e " make config-freelas \n\tUpdate the config files to run the app on this environment.\n"
	@echo -e " make exec \n\tEnter the container.\n"

clean:
	@echo -e "\nDeleting pyc files…\n"
	rm -fr `find . -name "*.pyc"`
	@echo -e "\nDeleting pyc files…[DONE]\n"
crawl-normal:
	@printf "Crawling $(SPIDERNAME) (normal) …[WAIT]\n"
	@printf "Crawling $(SPIDERNAME) (normal) …[DONE]\n"

crawl-slow:
	@printf "Crawling $(SPIDERNAME) (slow) …[WAIT]\n"
	@printf "Crawling $(SPIDERNAME) (slow) …[DONE]\n"
config-devel:
	echo 'Updating configuration…[WAIT]'
	cp -farv scrapy_project/spiders/settings.py.development scrapy_project/spiders/settings.py
	echo 'Updating configuration…[DONE]'

config-homol:
	echo 'Updating configuration…[WAIT]'
	cp -farv scrapy_project/spiders/settings.py.homologation scrapy_project/spiders/settings.py
	echo 'Updating configuration…[DONE]'

config-prod:
	echo 'Updating configuration…[WAIT]'
	cp -farv scrapy_project/spiders/settings.py.production scrapy_project/spiders/settings.py
	echo 'Updating configuration…[DONE]'

config-freelas:
	echo 'Updating configuration…[WAIT]'
	cp -farv scrapy_project/spiders/settings.py.freelas scrapy_project/spiders/settings.py
	echo 'Updating configuration…[DONE]'
exec:
	docker exec -it $$(docker ps | grep scrapy | awk '{print $$1}') bash
Another example, a Makefile for a Django app:
.EXPORT_ALL_VARIABLES:

SHELL=/bin/bash
ENV_FILE=../notepy.env
SET_VARIABLES=set -a && source ${ENV_FILE} && set +a &&

help:
	@echo 'run: run the app'
	@echo 'shell: get a bpython shell into the django app, auto-importing its models (django shell_plus)'
	@echo 'notebook: get a jupyter notebook shell into the django app, auto-importing its models (django shell_plus)'

run: migrate
	bash -c "${SET_VARIABLES} python manage.py runserver"

shell:
	bash -c "${SET_VARIABLES} python manage.py shell_plus --ipython"

notebook:
	@echo "IPython Notebooks can be updated (while running) to reflect changes in a Django application's code with the menu command Kernel > Restart."
	@echo 'HAVE FUN!'
	bash -c "${SET_VARIABLES} python manage.py shell_plus --notebook"
$ sudo systemctl restart nscd
$ vim ~/.bashrc
export PATH=$HOME/local/bin:$PATH
, then:
$ source ~/.bashrc
IMPORTANT: For this to work, you must 'pip install pygments'. The first pipe pretty-prints the JSON; the second one colorizes it:
$ curl http://localhost:5000/stats | python -m json.tool | pygmentize -l json
$ cat extra.urls.amostra | cut -d'?' -f 1 | sort | uniq > extra.urls.amostra.UNIQUES
(note that sort must come before uniq, since uniq only collapses adjacent duplicate lines)
E.g.:
Input file:
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Assinatura-Anual-Revista-Espresso--4-edicoes--6423952.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-12-meses-5142105.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-3-meses-5140925.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-6-meses-5142104.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Bolsa-Sacola-Santino-Brasil---Cms13004u02-1017141-5649159.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Bolsa-Sacola-Santino-Brasil---Cms13004u02-1017141-5649159.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Boneca-Polly-Pocket---Crissy-de-Pijama-com-Panda---Mattel-5426733.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Boneca-Polly-Pocket---Crissy-de-Pijama-com-Panda---Mattel-5426733.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Caixa-Acustica-Line-Array-Vertical-Passiva-8-X-2--150W-Cbt50La1-Preta---Jbl---Preto-6303575.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Caixa-Acustica-Line-Array-Vertical-Passiva-8-X-2--150W-Cbt50La1-Preta---Jbl---Preto-6303575.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Camera-Speed-Dome-HDCVI-Intelbras-VHD-3020-SD-HD-720p-20X-Zoom-PTZ-5225309.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Camera-Speed-Dome-HDCVI-Intelbras-VHD-3020-SD-HD-720p-20X-Zoom-PTZ-5225309.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Carga-De-Fumaca-Com-5-Litros-Dragon-Morango-Laserled-5413812.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Carga-De-Fumaca-Com-5-Litros-Dragon-Morango-Laserled-5413812.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Fernando-Bonassi--Um-Escritor-Multiplo-6326406.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Fernando-Bonassi--Um-Escritor-Multiplo-6326406.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Lirica-5670655.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Lirica-5670655.html?recsource=busca-int&rectype=busca-2701
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Livro---Como-ter-coragem-serenidade-e-confian--a-como-ter-coragem-serenidade-e-confian--a-5709242.html?recsource=busca-int&rectype=busca-2673
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Livro---Como-ter-coragem-serenidade-e-confian--a-como-ter-coragem-serenidade-e-confian--a-5709242.html?recsource=busca-int&rectype=busca-2701
Output file:
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Assinatura-Anual-Revista-Espresso--4-edicoes--6423952.html
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-12-meses-5142105.html
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-3-meses-5140925.html
http://www.extra.com.br/AlamedadeServicos/AssinaturaDigital/Pacote-Digital-do-Globo-6-meses-5142104.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Bolsa-Sacola-Santino-Brasil---Cms13004u02-1017141-5649159.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Boneca-Polly-Pocket---Crissy-de-Pijama-com-Panda---Mattel-5426733.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Caixa-Acustica-Line-Array-Vertical-Passiva-8-X-2--150W-Cbt50La1-Preta---Jbl---Preto-6303575.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Camera-Speed-Dome-HDCVI-Intelbras-VHD-3020-SD-HD-720p-20X-Zoom-PTZ-5225309.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Carga-De-Fumaca-Com-5-Litros-Dragon-Morango-Laserled-5413812.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Fernando-Bonassi--Um-Escritor-Multiplo-6326406.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Lirica-5670655.html
http://www.extra.com.br/AlamedadeServicos/CartaoPresente/Livro---Como-ter-coragem-serenidade-e-confian--a-como-ter-coragem-serenidade-e-confian--a-5709242.html
# stat -c '%A %a %n' /etc/sudoers /etc/ssh/sshd_config
-r--r----- 440 /etc/sudoers
-rw------- 600 /etc/ssh/sshd_config
e[x]ecute...: 1
[w]rite.....: 2
[r]ead......: 4
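These bits add up per owner/group/others. E.g., 640 means rw- for the owner (4+2), r-- for the group (4) and nothing for others (file.txt is a hypothetical example):
$ chmod 640 file.txt
$ stat -c '%A %a %n' file.txt
-rw-r----- 640 file.txt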