Install BackupPC on CentOS 6.3

BackupPC logo

Backups are important: every hard disk, every motherboard, every piece of hardware will eventually fail. Remember this if you don't have a backup system.

Step 1. Install required software

enable epel repo

yum install wget

rpm -i epel-release-6-7.noarch.rpm

yum install BackupPC

Step 2: Enable the Apache web server

edit apache config file

vi /etc/httpd/conf/httpd.conf

and make Apache run as the backuppc user

User backuppc

edit BackupPC apache config

vi /etc/httpd/conf.d/BackupPC.conf

It should look like this:

<IfModule !mod_authz_core.c>
# Apache 2.2
order deny,allow
allow from all
allow from 127.0.0.1
allow from ::1
require valid-user
</IfModule>

chkconfig httpd on

/etc/init.d/httpd start

Step 3: Configure the BackupPC password

htpasswd -c /etc/BackupPC/apache.users backuppc
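If htpasswd is not at hand (it ships with the httpd package), the same Apache MD5 (apr1) hash can be pre-generated with openssl; this is just a sketch, and the password below is a placeholder:

```shell
# Sketch: generate an Apache-compatible password hash without htpasswd.
# "s3cret" is a placeholder password; the resulting line can be appended
# to the file the step above creates (/etc/BackupPC/apache.users).
HASH=$(openssl passwd -apr1 's3cret')
printf 'backuppc:%s\n' "$HASH"
```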

Step 4: Enable BackupPC Service

chkconfig backuppc on

/etc/init.d/backuppc start

Step 5: Verify installation

open a browser and navigate to


Clean BackupPC install

sudo and the /etc/sudoers.d directory

sudo logo

I needed to add a new user to the sudoers file on several Debian machines. I didn't want to open a terminal on each machine and add the line manually; the other option was to append a new line to the file, like echo "new line" >> /etc/sudoers. But I don't like editing the sudoers file without using visudo; I don't feel safe.

Reading the Debian documentation I found a magical directive for appending external files: #includedir /etc/sudoers.d. That means if I add a new file with 0440 permissions (and the permissions are important) it will be appended to our sudo configuration.
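Under that scheme, adding a user can be scripted; this is a minimal sketch assuming a scratch directory in place of /etc/sudoers.d (so it runs without root) and a hypothetical user name:

```shell
# Sketch: create a sudoers drop-in the safe way. A scratch directory
# stands in for /etc/sudoers.d here; "deploy" is a hypothetical user.
DROPIN_DIR=$(mktemp -d)
USERNAME=deploy

# one rule per file; the 0440 permissions are mandatory or sudo skips it
printf '%s ALL=(ALL) ALL\n' "$USERNAME" > "$DROPIN_DIR/$USERNAME"
chmod 0440 "$DROPIN_DIR/$USERNAME"

# validate the syntax before trusting the file (skipped if visudo absent)
if command -v visudo >/dev/null 2>&1; then
    visudo -cf "$DROPIN_DIR/$USERNAME"
fi
```

On a real machine you would point DROPIN_DIR at /etc/sudoers.d and run this as root.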

Don't be tempted to remove the hash character out of habit: here it is not a comment indicator. Without the hash, includedir /etc/sudoers.d is an invalid line and visudo shows an error.

More problems installing Fedora 17

After installing Fedora 17 on my friend's computer, my desktop got its turn.

I followed my classic Fedora installation method:

  1. Download the x86_64 install DVD ISO image.
  2. Burn it to a blank DVD.
  3. Boot the computer from the DVD.

These are my normal steps, but this time the boot DVD showed a lot of errors like:

SQUASHFS error: Unable to read page, block xxxxxxxF, size xxxF

I read some forum threads saying the problem was with the DVD itself, because after installing Fedora on my desktop I had installed Hasefroch 7 and burned the Fedora ISO on that same computer.

After downloading a SHA-256 tool for Windows and checking that I had the correct image, I went to bed and left the computer verifying a newly recorded DVD.
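On Linux the same integrity check can be scripted with sha256sum; this sketch fakes the image file so it is runnable anywhere (with a real download you would fetch the published CHECKSUM file instead of generating it):

```shell
# Sketch: verify an image against its checksum before burning it.
# The ISO name is hypothetical and the file is faked here so the
# commands can be run anywhere.
cd "$(mktemp -d)"
ISO=Fedora-17-x86_64-DVD.iso
printf 'fake iso contents\n' > "$ISO"
sha256sum "$ISO" > CHECKSUM    # normally you download CHECKSUM instead

# a non-zero exit status here means the download is corrupt
sha256sum -c CHECKSUM
```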

When I got up I observed that the new DVD had errors too, so I decided to download a live CD ISO and try with that. This method worked and I could install Fedora, though I still saw a small number of errors; I thought the live CD was just too small to exercise everything. So I installed my system from the live CD, hoping the errors would disappear at first boot.

After the install, strange behavior appeared on the screen and the system seemed to freeze at moments, with unexpected X restarts. I decided to check the running processes using top, et voilà: my Phenom machine had only one processor working 🙁. Then I remembered that when I changed the hard disk, some wires from the power source had stopped the CPU fan for a moment, about 20 seconds. A disaster was coming: I had cooked my CPU and would need to buy a new processor. But when I rebooted into Hasefroch I saw four cores working, so it was not a CPU problem, and I checked the dmesg output.

[    0.006467] ACPI: Core revision 20120111
[    0.009031] ftrace: allocating 22596 entries in 89 pages
[    0.017650] ..TIMER: vector=0x30 apic1=0 pin1=2 apic2=-1 pin2=-1
[    0.028157] CPU0: AMD Phenom(tm) 9850 Quad-Core Processor stepping 03
[    0.028998] Performance Events: AMD PMU driver.
[    0.028998] ... version:                0
[    0.028998] ... bit width:              48
[    0.028998] ... generic registers:      4
[    0.028998] ... value mask:             0000ffffffffffff
[    0.028998] ... max period:             00007fffffffffff
[    0.028998] ... fixed-purpose events:   0
[    0.028998] ... event mask:             000000000000000f
[    0.028998] NMI watchdog enabled, takes one hw-pmu counter.
[    0.028998] Booting Node   0, Processors  #1
[    0.028998] smpboot cpu 1: start_ip = 93000
[    6.975648] CPU1: Not responding.
[    6.976466]  #2
[    6.976475] smpboot cpu 2: start_ip = 93000
[   14.279079] CPU2: Not responding.
[   14.281137]  #3
[   14.281146] smpboot cpu 3: start_ip = 93000
[   21.143977] CPU3: Not responding.
[   21.144993] Brought up 1 CPUs
[   21.145004] Total of 1 processors activated (5022.87 BogoMIPS).
[   21.147104] devtmpfs: initialized
[   21.147356] PM: Registering ACPI NVS region at cfcf0000 (12288 bytes)

I tried changing kernel boot parameters (disabling ACPI, NOACPI and a lot of other things), and every time my computer rebooted, Linux showed only one core.

Finally I found the guilty element: it wasn't the CPU, it was the motherboard, a GA-MA785GM-US2H.

I updated the BIOS to the latest version, but that didn't work. Finally I downloaded a beta BIOS, mb_bios_ga-ma785gm-us2h_f12a, from the Gigabyte web page, and it worked.

[    0.026770] CPU0: AMD Phenom(tm) 9850 Quad-Core Processor stepping 03
[    0.026998] Performance Events: AMD PMU driver.
[    0.026998] ... version:                0
[    0.026998] ... bit width:              48
[    0.026998] ... generic registers:      4
[    0.026998] ... value mask:             0000ffffffffffff
[    0.026998] ... max period:             00007fffffffffff
[    0.026998] ... fixed-purpose events:   0
[    0.026998] ... event mask:             000000000000000f
[    0.026998] NMI watchdog enabled, takes one hw-pmu counter.
[    0.026998] Booting Node   0, Processors  #1
[    0.026998] smpboot cpu 1: start_ip = 93000
[    0.038014] NMI watchdog enabled, takes one hw-pmu counter.
[    0.038094]  #2
[    0.038095] smpboot cpu 2: start_ip = 93000
[    0.050011] NMI watchdog enabled, takes one hw-pmu counter.
[    0.050083]  #3
[    0.050084] smpboot cpu 3: start_ip = 93000
[    0.062009] NMI watchdog enabled, takes one hw-pmu counter.
[    0.062032] Brought up 4 CPUs
[    0.062034] Total of 4 processors activated (20091.79 BogoMIPS).

Now all the cores are available. The screen doesn't work yet, and I'm downloading a new DVD copy because I don't like live CD installs.


Nvidia Fx 5200 on Fedora 17

Recently a friend lent me his desktop PC to reinstall the OS. I decided to install Fedora 17 as a backup OS: if something happens to Windows XP, Linux will save his life.

After installing and configuring Windows XP, Fedora took its turn, but when the install process ended an out-of-range message appeared on the screen. Fortunately the network was working and ssh was running :-P.

First I took my laptop and discovered the Nvidia computer's IP; you can check your router or use nmap.

ssh <nvidia_ip> -l root

vim /etc/default/grub

and append this text to your GRUB_CMDLINE_LINUX: "rhgb quiet rdblacklist=nouveau nouveau.modeset=0"

GRUB_CMDLINE_LINUX="rd.lvm=0 SYSFONT=True rd.luks=0 LANG=es_ES.UTF-8 KEYTABLE=es rhgb quiet rdblacklist=nouveau nouveau.modeset=0"
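The same edit can be scripted with sed; this is a sketch against a scratch copy of /etc/default/grub, and the sample GRUB_CMDLINE_LINUX content is an assumption:

```shell
# Sketch: append the nouveau-disabling parameters to GRUB_CMDLINE_LINUX
# without opening an editor. A scratch file stands in for
# /etc/default/grub so the command can be tried safely.
GRUB_FILE=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="rhgb quiet"' > "$GRUB_FILE"   # sample line

# insert the extra parameters just before the closing quote
sed -i 's/^\(GRUB_CMDLINE_LINUX=".*\)"$/\1 rdblacklist=nouveau nouveau.modeset=0"/' "$GRUB_FILE"
cat "$GRUB_FILE"
```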

and finally run

grub2-mkconfig -o /boot/grub2/grub.cfg

reboot your computer

reboot && exit

Now your screen is working. Not fluid, but working :-P.

Installing GitLab 2.1 on CentOS 6

gitlab logo

Step 1: Install needed packages

You will need to remove the installed Ruby, because the repo version is 1.8.7 and we need at least 1.9:

yum remove ruby

yum install wget

enable epel repos


rpm -Uvh epel-release-6-5.noarch.rpm

install required libraries

yum install readline-devel  libyaml-devel gdbm-devel  ncurses-devel  redis openssl-devel zlib-devel gcc gcc-c++ make autoconf readline-devel curl-devel expat-devel gettext-devel  tk-devel  libxml2-devel libffi-devel libxslt-devel libicu-devel httpd httpd-devel gitolite git-all python-devel python-pip sqlite-devel sendmail vim mysql-devel

Step 2: Install Ruby 1.9.3


tar xzvf ruby-1.9.3-p0.tar.gz

cd ruby-1.9.3-p0



./configure

make

make install

Step 3: Install gitolite

Create gitolite-admin user

    useradd -d /home/gitolite-admin gitolite-admin

Generate an RSA key pair for the gitolite-admin user:

su gitolite-admin

ssh-keygen -t rsa

Move the generated public key to gitolite's home dir:

cp /home/gitolite-admin/.ssh/ /var/lib/gitolite/
chown gitolite:gitolite /var/lib/gitolite/

Complete the gitolite and gitolite-admin user pairing:

su gitolite
gl-setup /var/lib/gitolite/

An editor will open; change the repo permissions by setting

$REPO_UMASK to 0007
su gitolite-admin
git clone gitolite@localhost:gitolite-admin

With the last command you cloned the gitolite-admin repo into gitolite-admin's home. The gitolite-admin repo is gitolite's configuration.

Before continuing we need to configure gitolite-admin's git profile:

git config --global user.email "gitolite-admin@localhost"
git config --global user.name "gitolite-admin"

Add gitolite-admin to the gitolite group:

usermod -a -G gitolite gitolite-admin

Change the gitolite-admin password:

passwd gitolite-admin

Step 6: Launch Redis

chkconfig redis on

/etc/init.d/redis start

Step 7: GitLab

Clone GitLab sourcecode

cd /var/www

git clone git://

chown -R gitolite-admin:gitolite-admin gitlabhq/

cd gitlabhq/

Install Python dependencies

pip-python install pygments

Install required gems

gem install bundler

bundle install

su gitolite-admin

bundle install

Prepare config files

cp config/database.yml.example config/database.yml

cp config/gitlab.yml.example config/gitlab.yml

Prepare the production environment

RAILS_ENV=production rake db:setup
RAILS_ENV=production rake db:seed_fu

You will get this login data:


Configure the gitolite to GitLab link

vim config/gitlab.yml

Your git_host: section must look like this:

# Git Hosting configuration
system: gitolite
admin_uri: gitolite@localhost:gitolite-admin
base_path: /var/lib/gitolite/repositories/
host: localhost
git_user: gitolite
# port: 22

Fix gitolite permissions:

chmod -R 770 /var/lib/gitolite/repositories/
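The effect of the permission fix can be sanity-checked; this sketch reproduces it on a scratch tree, with /var/lib/gitolite/repositories being the real path on the server:

```shell
# Sketch: reproduce the permission fix on a scratch tree and confirm the
# mode. On the real server REPOS is /var/lib/gitolite/repositories; the
# 770 mode lets members of the gitolite group (gitolite-admin) reach it.
REPOS=$(mktemp -d)/repositories
mkdir -p "$REPOS/gitolite-admin.git"
chmod -R 770 "$REPOS"

# every directory should now report mode 770
find "$REPOS" -type d -exec stat -c '%a %n' {} +
```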

Try your installation by launching WEBrick:

bundle exec rails s -e production

Open a web browser, go to yourhost:3000 and check that everything works; if you get an error, leave me a comment.

Install passenger for Apache

gem install passenger

follow the on-screen instructions

edit apache config file

vim /etc/httpd/conf/httpd.conf

and append these lines at the end

LoadModule passenger_module /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.11/ext/apache2/
PassengerRoot /usr/local/lib/ruby/gems/1.9.1/gems/passenger-3.0.11
PassengerRuby /usr/local/bin/ruby

enable apache service

/etc/init.d/httpd start

chkconfig httpd on


Installing OpenWRT Backfire on a Fonera 2100

Sometimes pocket money isn't enough to buy new network devices, or we want to save some money by trading nonexistent money for personal time. In my case I needed to create a wireless bridge in my personal network, and running an Ethernet wire between two buildings wasn't an option. Searching through my boxes of forgotten stuff I found two Foneras, model 2100. Fon's firmware isn't powerful enough to do WDS, and when these things happen, Free Software is our solution.

With a couple of Foneras ready for flashing I'm a lucky man, and I also found a USB-to-3.3V serial adapter (one of my superpowers is that I can brick anything). First of all we need to plug the serial adapter into the Fonera's serial port; check the attached image for the pinout.

Fonera pinouts

Once we have the serial port connected, we need software to send data over it; in my case I use GtkTerm.

On Debian it is simple: I open a root terminal and write

apt-get install gtkterm

and in the same terminal I launch it:

gtkterm
I use a root terminal because I don't want to waste time configuring /dev/ttyS* permissions.

The next step is configuring the port speed: 9600, 8N1.

In the GtkTerm menu select Configuration/Port and fill in the boxes; in my case the port is /dev/ttyUSB0. If you don't know your port, the dmesg output can be helpful.

GtkTerm 9600,8N1

Now it's time to plug the power source into our Fonera and watch the output.

It must look something like this in our GtkTerm:

Fonera booting

We need to get access to the RedBoot console: just unplug and replug the Fonera's power source and press Ctrl+C continuously until you see


We need to download our firmware. Go to

and download these files:

  • openwrt-atheros-vmlinux.lzma
  • openwrt-atheros-root.squashfs
Save the files, and then we need to install a TFTP server on our machine.

Search for the tftpd configuration for your distro.

The next step is loading the files into the Fonera over TFTP.
In GtkTerm we write:
load -r -b %{FREEMEMLO} openwrt-atheros-vmlinux.lzma
fis init
fis create -e 0x80041000 -r 0x80041000 vmlinux.bin.l7
load -r -b %{FREEMEMLO} openwrt-atheros-root.squashfs
fis create rootfs
If your system doesn't boot because you previously installed DD-WRT or something similar, write this in the RedBoot console:
fconfig boot_script_data
fis load -l vmlinux.bin.l7
“press enter”

Installing phpUnderControl on CentOS 6

PhpUnderControl is a way to use continuous integration (CI) with PHP projects.

Step 1: Enable EPEL repo and Remi repo

yum install wget

rpm -Uvh

rpm -Uvh

enable remi repository editing file /etc/yum.repos.d/remi.repo


Step 2: Install Java and other stuff

yum install unzip

yum install ant

yum install java-1.6.0-openjdk-devel

yum install php-phpunit-PHP-CodeCoverage.noarch

yum install phpdoc.noarch

yum install php-phpunit-phpcpd.noarch

yum install php-phpunit-phploc.noarch

yum install php-phpunit-PHPUnit.noarch


rpm -i php-pear-Console-CommandLine-1.1.3-3.el5.remi.noarch.rpm

yum install php-phpunit-PHP-CodeBrowser

yum install php-phpmd-PHP-PMD.noarch

yum install php-ezc-Graph.noarch

yum install git

yum install subversion

Step 3: Download CruiseControl

cd /opt



mv cruisecontrol-bin-2.8.4 cruisecontrol


cd cruisecontrol

Step 4: Fix a problem with JAVA_HOME

Open the CruiseControl startup script with your favorite editor (vim, emacs, nano, pico, gedit …) and set the JAVA_HOME value; the beginning of the file must look like this:

#!/usr/bin/env bash
JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk"
#################################################
# CruiseControl, a Continuous Integration Toolkit
# Copyright (c) 2001, ThoughtWorks, Inc.
# 200 E. Randolph, 25th Floor
# Chicago, IL 60601 USA
# All rights reserved.
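If you prefer not to open an editor, the same JAVA_HOME line can be inserted with sed; this sketch uses a scratch file standing in for the CruiseControl startup script, and the JVM path is the one assumed in this step:

```shell
# Sketch: the JAVA_HOME edit, scripted. A scratch file stands in for
# the CruiseControl startup script; the JVM path matches the
# java-1.6.0-openjdk package layout assumed above.
CC_SCRIPT=$(mktemp)
printf '#!/usr/bin/env bash\n# CruiseControl startup script\n' > "$CC_SCRIPT"

# insert the JAVA_HOME assignment right after the shebang line
sed -i '1a JAVA_HOME="/usr/lib/jvm/java-1.6.0-openjdk"' "$CC_SCRIPT"
head -n 2 "$CC_SCRIPT"
```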

Step 5: Open the needed ports

Step 6: Download phpUnderControl

cd /opt


mv 0.6.1beta1


mv phpundercontrol-phpUnderControl-04197bb/ phpundercontrol

Step 7: Install phpUnderControl over CruiseControl

cd phpundercontrol

cd bin

./phpuc.php install /opt/cruisecontrol/

Install Redmine on RHEL 6 and RH-based distributions


Step 1: Install packages needed
yum install mysql-server ruby rubygems httpd ruby-devel mysql-devel gcc-c++ curl-devel httpd-devel apr-devel apr-util-devel

Step 2: Enable services at boot-time
apache server
chkconfig httpd on

mysql server
chkconfig mysqld on

Step 3: Open needed ports

Open /etc/sysconfig/iptables and add these rules:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 80 -j ACCEPT

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT

Step 4: Install ruby libs
gem install rails -v=2.3.5
gem install rack -v=1.0.1
gem install mysql
gem install -v=0.4.2 i18n
gem install passenger

Step 5: Download redmine

step 6: add redmine user

groupadd redmine
useradd -g redmine redmine
passwd redmine

step 7: decompress redmine
tar -xzvf redmine-1.1.2.tar.gz

step 8: move to destination dir
cd <dest_dir>

step 9: copy redmine
cp -R <redmine_uncompress_dir>/* ./

Step 10: create a new database and a new username in mysql

step 11: configure redmine
cd config/
mv database.yml.example database.yml
open database.yml #complete the data needed
cd ..

step 12: generate session store secret
rake generate_session_store

step 13: generate database structure
RAILS_ENV=production rake db:migrate

step 14: generate default configuration
RAILS_ENV=production rake redmine:load_default_data

step 15 Setting up permissions
mkdir tmp public/plugin_assets # in case the dirs don't exist
sudo chown -R redmine:redmine files log tmp public/plugin_assets # change redmine:redmine if you created a different user
sudo chmod -R 755 files log tmp public/plugin_assets

step 16 check redmine installation
ruby script/server webrick -e production
open <your_host>:3000 in your browser
the login is admin and the password is admin too

step 17 enable mod_cgi in apache
check in /etc/httpd/conf/httpd.conf that this line exists:
LoadModule cgi_module modules/

step 18 create public/dispatch.cgi file
mv public/dispatch.cgi.example public/dispatch.cgi
edit the first line
#!/usr/bin/env ruby
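The shebang edit can also be done with sed; sketched here on a scratch file standing in for public/dispatch.cgi (the original shebang below is only an example):

```shell
# Sketch: point dispatch.cgi's shebang at the system ruby. Done on a
# scratch copy; on the real install the file is public/dispatch.cgi.
CGI=$(mktemp)
printf '#!/usr/local/bin/ruby\n# dispatch stub\n' > "$CGI"  # example original shebang

# rewrite only the first line
sed -i '1s|^#!.*|#!/usr/bin/env ruby|' "$CGI"
head -n 1 "$CGI"
```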

step 19 grant execution rights
chmod 755 public/dispatch.cgi

step 20 grant apache permissions
chown -R apache:apache files log tmp vendor

step 21 set production state in file config/environment.rb
uncomment the line ENV['RAILS_ENV'] ||= 'production'

step 22 configure passenger
run passenger-install-apache2-module and follow the instructions

if you get an error, check this:
just edit the "/usr/lib/ruby/gems/1.8/gems/passenger-3.0.6/lib/phusion_passenger/platform_info/apache.rb" file and replace "test_exe_outdir" with "tmpexedir".

step 23 Enable cgi in SeLinux
setsebool -P httpd_enable_cgi 1

step 24 add a virtual host to the apache config file
<VirtualHost *:80>
    DocumentRoot /live/redmine/public/
    ErrorLog logs/redmine_error_log
    <Directory /live/redmine/public/>
        Options Indexes ExecCGI FollowSymLinks
        Order allow,deny
        Allow from all
        AllowOverride all
    </Directory>
</VirtualHost>

Step 25: Close port 3000 by editing /etc/sysconfig/iptables and removing the rule added earlier:

-A INPUT -m state --state NEW -m tcp -p tcp --dport 3000 -j ACCEPT

Step 26: reboot or restart services

Redmine Working

Laboratory III

What commands would be needed for a Linux system to replace the router R2 shown in the diagram? Assume any data you need to complete the exercise (interface names, gateways, etc.).

Network Diagram
Click for larger view

Previous Steps

Enable IP Forwarding


echo 1 > /proc/sys/net/ipv4/ip_forward


vim /etc/sysctl.conf

change the value to net.ipv4.ip_forward = 1

sysctl -p /etc/sysctl.conf # apply the changes
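To confirm the flag actually changed, the procfs entry can be read back; this sketch only reads the value, so it runs on any Linux box without privileges:

```shell
# Sketch: read the forwarding flag back after the sysctl change;
# 1 means the kernel will forward packets between interfaces.
v=$(cat /proc/sys/net/ipv4/ip_forward)
echo "net.ipv4.ip_forward is $v"
```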

Configuring network interfaces


ifconfig eth0 down

ifconfig eth0 <ip_address> netmask <netmask> up
ifconfig eth1 down

ifconfig eth1 <ip_address> netmask <netmask> up


On Debian: edit /etc/network/interfaces like this

auto lo

iface lo inet loopback

iface eth0 inet static
    address <ip_address>
    netmask <netmask>

iface eth1 inet static
    address <ip_address>
    netmask <netmask>

Red Hat and derivatives: edit /etc/sysconfig/network-scripts/ifcfg-<interface name>

Device eth0 file /etc/sysconfig/network-scripts/ifcfg-eth0


Device eth1 file /etc/sysconfig/network-scripts/ifcfg-eth1



Option 1: Using Static Routing


# from network 2 to network 3: assumed not necessary

# from network 3 to network 2: assumed not necessary

# from network 3 to network 1

ip route add <network_1> via <gateway_ip> dev eth0


on Debian

edit /etc/network/interfaces

write this after the interface setup

up ip route add <network_1> via <gateway_ip> dev eth0

on Fedora

edit /etc/sysconfig/network-scripts/route-<device>


If you want to add more routes, increment the numbers next to GATEWAY, NETMASK and ADDRESS, for example: GATEWAY1=<gateway> NETMASK1=<netmask> ADDRESS1=<network>
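The route file format can be sketched like this; the file is written to a scratch directory and the addresses are placeholders (documentation/private ranges), so substitute your own network's numbers:

```shell
# Sketch: a route-eth0 file written to a scratch dir; on a real Fedora
# box the path is /etc/sysconfig/network-scripts/route-eth0 and the
# addresses below are placeholders to replace.
ROUTE_FILE=$(mktemp -d)/route-eth0
cat > "$ROUTE_FILE" <<'EOF'
ADDRESS0=192.0.2.0
NETMASK0=255.255.255.0
GATEWAY0=198.51.100.1
EOF
cat "$ROUTE_FILE"
```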


Pros:

  • No extra processing and added resources, as in the case of dynamic routing protocols
  • No extra bandwidth requirement caused by the transmission of excessive packets for the routing table update process
  • Extra security by manually admitting or rejecting routing to certain networks


Cons:

  • Network administrators need to know the complete network topology very well in order to configure routes correctly
  • Topology changes need manual adjustment on all routers, which is very time-consuming

Option 2: Using NAT

Basically, NAT works like static routing but rewrites the source IP of outgoing packets, keeping the internal addressing hidden.


# delete old configuration, if any
# flush all the rules in the filter and nat tables
iptables --flush
iptables --table nat --flush

# delete all chains that are not in the default filter and nat tables, if any
iptables --delete-chain
iptables --table nat --delete-chain

# set up IP forwarding and masquerading (NAT)
iptables --table nat --append POSTROUTING --out-interface eth0 -j MASQUERADE
iptables --append FORWARD --in-interface eth1 -j ACCEPT


Store the rules into a saved rule set so they survive a reboot (on CentOS/RHEL: service iptables save).


Pros: the same as static routing, plus:

It also benefits security, as attackers can't target a computer directly; they have to get past the router first.


Cons:

  • Network Address Translation does not allow true end-to-end connectivity, which is required by some real-time applications. A number of real-time applications require the creation of a logical tunnel to exchange data packets quickly in real time. They require fast, seamless connectivity devoid of intermediaries, such as a proxy server, that tend to complicate and slow down the communications process.
  • NAT creates complications in the functioning of Tunneling protocols. Any communication that is routed through a Proxy server tends to be comparatively slow and prone to disruptions. Certain critical applications offer no room for such inadequacies. Examples include telemedicine and teleconferencing. Such applications find the process of network address translation as a bottleneck in the communication network creating avoidable distortions in the end-to-end connectivity.
  • NAT acts as a redundant channel in the online communication over the Internet. The twin reasons for the widespread popularity and subsequent adoption of the network address translation process were a shortage of IPv4 address space and the security concerns. Both these issues have been fully addressed in the IPv6 protocol. As the IPv6 slowly replaces the IPv4 protocol, the network address translation process will become redundant and useless while consuming the scarce network resources for providing services that will be no longer required over the IPv6 networks.

Option 3: Using RIP

RIP is a distance-vector routing protocol; it is more flexible than using static routes, and necessary if the number of subnets grows. Do you want to fight against hundreds of rules, or assume the risk of downtime created by a router malfunction?

install zebra


edit the /etc/zebra/ripd.conf file

redistribute connected

version 2

ip rip authentication string "max 16 characters"

router rip


Pros:

  • Easy to configure and use
  • V2 supports VLSM and CIDR


Cons:

  • Converges slowly on large networks
  • Doesn’t recognize bandwidth of links
  • Doesn’t support multiple paths for the same route
  • Routing updates can require significant bandwidth because the entire routing table is sent
  • Prone to routing loops

Option 4: Using OSPF (Open Shortest Path First)

OSPF is a routing protocol that uses the Dijkstra algorithm to find the quickest path. In a set of subnets where the routers are connected at different speeds, it can work better than RIP.

install zebra

add the necessary VTY in  /etc/services

zebrasrv        2600/tcp             # zebra service
zebra           2601/tcp              # zebra vty
ospfd           2604/tcp              # OSPFd vty
ospf6d          2606/tcp              # OSPF6d vty

edit zebra.conf file

hostname R2
password zebra
enable password z3bRa
log file /var/log/zebra/zebra.log
interface eth0
description Network 2
ip address <ip_address/prefix>
interface eth1
description Network 3
ip address <ip_address/prefix>

start zebra service

/usr/sbin/zebra -dk
/usr/sbin/ospfd -d

Telnet to port 2604 on the local machine to begin the OSPF configuration, and type enable to get privileged mode.

The next step is announcing the networks that we want to publish to our neighbors:

R2:~# telnet 0 2604
Connected to 0.
Escape character is '^]'.

Hello, this is zebra (version 0.84b)
Copyright 1996-2000 Kunihiro Ishiguro

User Access Verification

ospfd> enable
ospfd# configure terminal
ospfd(config)# router ospf
ospfd(config-router)# network <network_2/prefix> area 0
ospfd(config-router)# passive-interface eth0

ospfd(config-router)# network <network_3/prefix> area 0
ospfd(config-router)# passive-interface eth1
ospfd(config-router)# end
ospfd# write file
Configuration saved to /etc/zebra/ospfd.conf


Pros:

  • Scalability: OSPF is specifically designed to operate with larger networks.
  • Full subnetting support: OSPF can fully support subnetting.
  • Hello packets: OSPF uses small hello packets to verify link operation without transferring large tables.
  • TOS routing: OSPF can route packets by different criteria based on their type-of-service field.
  • Tagged routes: routes can be tagged with arbitrary values, easing interoperation.


Cons:

  • Very processor-intensive
  • Maintaining multiple copies of routing information increases the amount of memory needed
  • Not as easy to learn as some other protocols, even though OSPF can be logically segmented by using areas
  • If an entire network is running OSPF and one link within it is "bouncing" every few seconds, OSPF updates will dominate the network by informing every other router every time the link changes state