In my previous post, I explained how to upload pictures from your iPhone straight to your Home Server, but it turns out it requires some extra configuration on the vsftpd side. Pixelpipe uses passive FTP connections and I hadn't explained how to set up vsftpd to accept such connections. This feature is handy for all sorts of other reasons (e.g. most browsers only support passive mode as well), so here is how you go about enabling passive FTP mode in vsftpd.
Step 1: Configure vsftpd
Open /etc/vsftpd.conf and add the following lines:
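The block boils down to something like this (the port range matches the one opened up in Step 3; the pasv_address shown is a placeholder you replace with your own WAN IP):

```
# Enable passive mode
pasv_enable=YES
# Restrict passive data connections to this port range
pasv_min_port=60000
pasv_max_port=60100
# Your router's WAN IP address (placeholder shown here)
pasv_address=203.0.113.1
```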
The first line is pretty self-explanatory (oh alright then, it enables passive mode). The second and third lines set the ports that vsftpd is allowed to use for the passive data connection. Basically, in passive mode, FTP assigns a random port > 1023 for handling the data connection. However, I want to limit this "randomness" to a narrow range as I need to open these ports up (see next steps). You can choose any numbers here; I settled for a 100-port range.
In the last line you need to set your WAN IP address (so not your server's LAN IP address, but the router's WAN IP address). Now I am aware that you might have a dynamic IP address assigned by your ISP, but we can solve that with a little shell script; for now, just hard-code it in the .conf file.
Step 2: Restart vsftpd server
> sudo /etc/init.d/vsftpd restart
Step 3: Open your data ports in your router and firewall
Make sure the following ports are forwarded to your server: 20, 21 and the PASV range you specified in your vsftpd.conf file (60000 - 60100 in my case).
Fire up your favorite FTP client and you should now be able to connect in passive mode.
This setup is needed to get Pixelpipe to upload your pictures to your FTP server.
What if you have a Dynamic IP address?
If you are on a dynamic IP address, you need to change the pasv_address every time you get a new IP address assigned. Alternatively, you can create a DNS name for your IP address (at dyndns.org or some other DynDNS service) and use a script to change your .conf file automatically (create a cronjob). Here is an example script:
#!/bin/sh
# location of the vsftpd config file
vsftpd_conf=/etc/vsftpd.conf
#change to your domain name in next line
my_ip=`host your_host.dyndns.org | cut -f4 -d" "`
vsftpd_ip=`grep pasv_address $vsftpd_conf | cut -f2 -d=`
if [ "$my_ip" != "$vsftpd_ip" ] ; then
( echo ",s/$vsftpd_ip/$my_ip/g" && echo w ) | ed - $vsftpd_conf
# restart so vsftpd picks up the new address
/etc/init.d/vsftpd restart
fi
Note that you need to run the crontab as root (so first do a sudo -i, then crontab -e, add your script and make it run e.g. every 15 minutes).
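The matching crontab entry, assuming you saved the script as /root/update_pasv_ip.sh (a hypothetical path you can of course change), would look like this:

```
# m  h  dom mon dow  command
*/15 *  *   *   *    /root/update_pasv_ip.sh
```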
I recently purchased an iPhone and I have been looking for a solution to get the pictures off the phone and on to my Linux Home Server. I concocted the following solution:
Step 1: Install vsftpd
From http://vsftpd.beasts.org: "vsftpd is a GPL licensed FTP server for UNIX systems, including Linux. It is secure and extremely fast. It is stable." Well, that did it for me, so I went ahead and installed it (using the Synaptic Package Manager).
Step 2: Configure vsftpd
By default, vsftpd allows anonymous users to log on to your system (I think they can't actually do anything, but I'm not sure) and I didn't want that, so open up /etc/vsftpd.conf
> sudo gedit /etc/vsftpd.conf
and make the following changes
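A minimal set of changes that matches the behaviour described below (no anonymous logins, local users confined to their home directories, with write access) would be along these lines:

```
# No anonymous logins
anonymous_enable=NO
# Allow local Unix users to log in
local_enable=YES
# Allow FTP commands that write to the filesystem
write_enable=YES
# Lock users into their home directories
chroot_local_user=YES
```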
and restart the vsftpd server
> sudo /etc/init.d/vsftpd restart
Now only your Linux users can login and they can only access their home directories and write files.
Try it out with your favorite ftp client first to see if it works.
Now for the iPhone part.
Step 1: Download and install the pixelpipe iPhone application
This application lets you upload your pictures to all sorts of online photo websites (Picasa, Flickr etc) BUT it also lets you upload to an FTP server, and we just happen to have one of those lying around.
Step 2: Create an account at pixelpipe
You need to set up a Pixelpipe account that goes with your iPhone app, and there you can add your FTP server as a "pipe".
Step 3: Take pictures and upload
You can take pictures as normal OR you can take them from within the pixelpipe application and have them uploaded straight to your server.
Can't wait for the official release of Firefox 3.5 for Linux? Install it now with the following command:
wget -O - http://releases.mozilla.org/pub/mozilla.org/firefox/releases/3.5/linux-i686/en-US/firefox-3.5.tar.bz2 | tar xj -C ~
It won't replace your default browser (until it is added to the repositories) so you need to launch it manually:
> ~/firefox/firefox
I am assuming you already have Apache 2.2 installed.
Step 1: Install Webmin
You can do everything in this blog post manually, but I found Webmin extremely useful. It provides a web GUI for all kinds of services running on your Linux machine, including Apache, which is what we are going to use it for.
Step 2: Make sure the auth_digest Apache module is enabled
Go to webmin (https://localhost:10000/ if you installed it with the defaults), open the Servers menu on the left-hand side, then click on Apache WebServer and then the Global Configuration tab. In there you will find a Configure Apache Modules icon; click on that. In the list of modules that appears, make sure that auth_digest is enabled (tick the box if not). When done, click on "Enable Selected Modules".
Step 3: Add a port
I want to use a special port for my Virtual Host, so we need to add it to the web server config file so Apache listens on this new port. You can do this in webmin: go to Servers -> Apache WebServer and click on the Global configuration tab. There you can click on the Networking and Addresses icon. Add your IP address and the port you want Apache to listen on to the list and hit Save.
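If you prefer editing the config file directly, the equivalent of this webmin step is a Listen directive (on Ubuntu it lives in /etc/apache2/ports.conf; the address and port below are examples only):

```
# Example only: use your server's LAN IP and your chosen port
Listen 192.168.1.10:8080
```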
Step 4: Create some users
Your Apache Server comes with an application called htdigest (mine was in /usr/bin/htdigest) that you can use to create users. The syntax is:
sudo htdigest -c passwordfile realm username
> sudo htdigest -c /etc/apache/pwds "By Invitation Only" mark
This will prompt you twice for the password of the user. If you want to add another user, use the same command but without the -c, so
> sudo htdigest /etc/apache/pwds "By Invitation Only" foobar
The realm is sort of a grouping mechanism, it will be used later.
This should have created a file pwds in /etc/apache; if you open this file you will see the users, their realms and (an encrypted version of) their passwords.
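Each line in that file has the form user:realm:hash, where the hash is the MD5 of "user:realm:password". You can reproduce it yourself; the sketch below assumes a hypothetical password "secret" for the user mark:

```shell
# htdigest stores MD5("user:realm:password"); "secret" is a made-up password
printf '%s' 'mark:By Invitation Only:secret' | md5sum | cut -d' ' -f1
```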
Step 5: Create a Virtual Server
Go back to webmin, Servers -> Apache WebServer, and click on the Create virtual host tab. I specified my IP address and the port from step 3, the document root where all the web files live that this Virtual Host points to, and I also gave a server name (although I am not sure where this is used); I left the rest as default.
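In the config file, the result is a VirtualHost block along these lines (all values here are examples, not the ones from my setup):

```
<VirtualHost 192.168.1.10:8080>
    ServerName home.example.org
    DocumentRoot /var/www/private
</VirtualHost>
```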
Step 6: Create Directives
In order for the users we created to get access to our Virtual Server, we need to change the directives of the virtual server. You can do this in webmin: go to Servers -> Apache WebServer and click on the Existing virtual hosts tab. From the list, select the virtual server you created. On the page that comes up, select Edit Directives and add the following (a standard mod_auth_digest block for Apache 2.2, pointing at the password file from step 4):
AuthType Digest
AuthName "By Invitation Only"
AuthDigestProvider file
AuthUserFile /etc/apache/pwds
# Optional line:
AuthDigestDomain /
Require valid-user
You can see we use the realm here in the AuthName label.
And that's it. Go to your website and you will be asked for a username and password; without them you cannot get in.
If you, like me, owned a Netgear WNR854T router for more than 6 months, you are probably looking at a front panel with just the green power light on, desperately trying to connect to it. As a last resort you hit the interwebs to look for a fix which is when you realize that you probably should have read the reviews on amazon.com, buy.com or any other retailer. Yes my friend, you bought a piece of junk, your router is no more! It has ceased to be! It's expired and gone to meet its maker! This, is a late router! It's a stiff! Bereft of life, it rests in peace! It's rung down the curtain and joined the choir invisible! "THIS IS AN EX ROUTER!"
Mine failed after 12 months and 20 days which is, as the friendly Netgear Customer Support Representative reminded me, exactly 20 days past the warranty period. I filed a complaint with the FTC Bureau of Consumer Protection but apparently it is perfectly legal in this fine country to sell any POS as long as it gets replaced if it fails in the warranty period, to which I replied "With the same POS?", yep, that's all perfectly fine, long live capitalism. I consulted a few lawyers (no shortage of those here) and they confirmed that this is acceptable business practice, probably not good for business in the long run, but business nonetheless. I followed this all up with a few F-bomb riddled e-mails to Netgear but they wouldn't budge and I gave up. I was planning on taking the sledgehammer to it, Office Space style, but for some reason I never did, maybe I thought I would set a bad example for my kids or maybe I felt sorry for the router, who knows.
It has been more than a year now, and my replacement router, a D-Link DIR-655, has been humming along without any hiccups whatsoever; it is much faster and more configurable, a fine piece of machinery. But I still cannot believe I have a bricked router in the house, I never got over it and haven't had the heart to toss it out. So in one last attempt before composting it, I decided to open the thing up and see if I could find what was wrong with it.
The first thing I did when I found it was to plug it in, maybe hoping that time would have healed it, but it hadn't. I found myself staring at that green light again, like I had over a year ago, first transfixed, but very quickly the sledgehammer urge came over me again like a green haze, at which point I recomposed myself and started focusing on the job at hand. I couldn't immediately find any screws, so maybe I would have to use a hammer after all. When I turned the router upside down, a pair of stickers drew my attention and made me chuckle:
I presumed you had to rip these off (maybe there were screws under them), so I did. No screws, but it became clear that the side panels of the router are held together by those plastic hooks you can see at the bottom, and the sticker will get damaged if you remove the panels, hence voiding your warranty. Put your router flat on its side so that you can read the labels at the back above the ports (i.e. make sure it is not upside down), like so:
The side panel that is now on top is the panel you need to remove first (you'll see why later). Take a sharp tool (I used scissors but a small flat-head screwdriver will do too) and start pushing in the (5) plastic hooks of the top side panel one by one while slowly lifting the side panel. When you push in the last one, the panel should pop right off. There are a few internal plastic hooks along the other sides; again you can use a long tool to pry those loose while you pull, but I just pulled and it came off without damaging any of those other hooks. You should be looking at something like this now:
You can clearly see the wireless card with 3 cables coming out of it. Those are the antennas, and you can see they run to the other panel into 3 tubes. This is why you cannot remove the other side panel first, it's kinda attached to the wireless card. It seems that the wireless card is attached to an IDE port and can be removed. As I do not need wireless on this router, I decided to start with that and see what would happen if I removed it. Start by disconnecting the 3 wires from the card; just pull them off, it's a snap-on connection. You can now safely pull the other side panel off.
Next you need to remove the 2 white plastic pins in the corners opposite the IDE port that pin the card to the motherboard. I found it easiest to start from the underside of the motherboard: just squeeze them and push them through the holes; the wireless card will bend but it is very flexible. Then bend the card even further and push the pins from the other side (on the wireless card) all the way through. Alternatively, you can just clip the pins, we won't need them anymore anyway.
Now for the scary bit, removing the card from the IDE port. Unfortunately, it seems that it is soldered onto the motherboard. I took my chances and just ripped it loose (finally some revenge), pulling hard away from the port. The plastic sides broke off, but no real damage was done; the card came out clean, as did the motherboard. Here is the result, router with spare parts:
I thought after this major surgery I would give my router another try and I plugged the power in (be careful not to touch any components when you do this OR better, put the panels back before you try this). It had been so long since I last used it that I didn't remember the boot sequence anymore. As a result, my first reaction was disappointment: that all too familiar green light was staring back at me. But then it suddenly started flickering and turned orange, and then I remembered that this is how it boots. Hurray, it's alive, ALIVE! I plugged in my internet connection and poof, the internet light started flickering.
So there you have it: by removing your wireless card you can revive your WNR854T router from the dreaded Green Ring of Death (GROD). You will lose the wireless capabilities, but at least it's not bricked.
In the next article I will explain how to use this router as a second router in your LAN.
As promised, in this post I will detail how to install NX (No Machine) on your Ubuntu server and an NX client on your Windows PC.
Open your favorite browser on your server, point it here and download the client, node and server. The client is needed because it ships libraries used by the node. The node is needed because it ships tools needed by the server. Make sure you install them in that order. Your web browser should automatically ask you if you want to install the files. If it doesn't, save the files, open a terminal window and issue the following commands:
$ sudo dpkg -i nxclient_3.3.0-3_i386.deb
$ sudo dpkg -i nxnode_3.3.0-3_i386.deb
$ sudo dpkg -i nxserver_3.3.0-8_i386.deb
Next you need to get the Windows client and install it. Once installed, open the client and it will present you with the Connection Wizard. Provide a name for your session (it can be anything) and enter your server's IP address in the Host field. Leave the port set to 22. You can also select an Internet connection type; since I am using it over my LAN, I set it to LAN.
The next screen is where you set the Desktop you want to use in your Linux session that will get initiated by NX. I use the same as my server (Unix/GNOME), but you can pick anything you want, regardless of what you use normally on your server. You also set the screen size in this window, set it to available area to have your session run in a full screen window. And finally you can add a shortcut on your desktop so that you can launch this session directly just by double clicking the icon on your desktop.
If you want to send sound from your server to the NX client, you need to configure both the client and the server a bit more. Open the NX Client for Windows tool, select your just configured session and hit Configure. Go to the Services Tab and tick the box next to Multimedia Support.
On the server, you need to play sounds using the Enlightened Sound Daemon (ESD). Most Gnome applications use the gstreamer subsystem to play sounds so you will need to configure gstreamer to use esd as the output.
For GNOME: System -> Preferences -> More Preferences -> Multimedia System Selector -> Audio -> Default Output Plugin -> Output = ESD
For KDE: KDE Control Center -> Sound & Multimedia -> Sound System -> Hardware -> Select the audio device = Enlightened Sound Daemon
I could not get this to work, so I used VLC and the VLC ESD plugin, and that works perfectly.
I made a reference to No Machine in a previous post so I thought I let you know how to install it and why, starting with the latter.
I have already described one method of connecting remotely to your server (using VNC); No Machine offers an alternative. It has a few distinct advantages over VNC, which is why I actually prefer it.
The most notable feature is that NX is much faster than VNC. It responds much more quickly to mouse movements and keyboard entries. This in itself is all the reason I need to use NX; however, there is more.
When you connect to your server using NX, you do not have to connect to an existing session. By default, it will start a completely new session, and you can even choose whether you want to use GNOME or KDE for that session, regardless of what you have running on your server already. Unlike with VNC, you do not take over the session on the server, so if somebody should be sitting at that server, he can continue to work independently of you; in fact, he wouldn't even know you were connected. You cannot do that with VNC.
Another nice feature is that because you get a new session, you do not have to use the same screen resolution as the server. For me, this comes in very handy as I have a crappy monitor connected to my server with a very low resolution, but the computer I use to remotely control my server has a very nice, large screen. When I use VNC, I get a very small window that shows me the same desktop as on the server's screen; when I connect with NX, it will use the whole screen of my desktop, giving me much more real estate to work with.
Finally, you can pipe sounds from the server to your remote machine with NX, again that doesn't work with VNC. And since I have no speakers attached to my server, but I obviously do to my desktop, this feature gives a voice to my server that I actually never had before.
It is, however, not Open Source, although you can download a free version that allows 2 users to connect at the same time, no matter what their location is, and share the desktop, which suits me just fine.
In my next post I will explain how to install NX and configure it.
Today we are going to look back at one of the commands I mentioned in an earlier post; the sudo command.
Each Linux system (including Ubuntu) comes with a special user, a superuser, called root. It is the equivalent of the "Administrator" account in Windows and is used for the same purpose: to administer the system. To be able to do this, the root superuser has access to everything and can do anything on your system, including destroying it. You can see that the root account should not be used by just anybody, and you should keep its password a close secret. In fact, root can be so dangerous that in Ubuntu the account is disabled by default; in other words, you cannot even use it (you could re-enable it, but as you will see, there really is no reason to do this).
So how do you, the non-superuser perform administrative tasks, i.e. how do you run commands that require root level privileges? You use the sudo command. sudo stands for super user do and allows authorized users to run commands as root without using the actual root account. So even though you are not root, you can pretend to be root. You do this by simply prepending the command you need to run with sudo, e.g. as I showed in an earlier post:
$ sudo apt-get install firefox
The apt-get command requires root privileges, so in order to be able to run it as me, I have to prepend it with sudo. You will notice that when you do this, you will be asked to provide a password. This is YOUR USER password, not the root account's password (remember, root is disabled), i.e. the same password you provide when you log in to Ubuntu (if you didn't already enable the automatic login option as discussed in Configure Ubuntu For Remote Access, Part II: Wake On LAN (WOL)).
Anything you need to do as administrator of an Ubuntu system can be done via sudo, which is why there is no reason to ever enable the root account. However, this begs the question: what is the difference between using the root account directly and using sudo? There are a few subtle but very important ones, actually. First of all, if you log in as root and leave your terminal, anybody sitting at it after you leave could do anything they want to your machine. If you use sudo, they would have to provide YOUR (obviously super secure) password before they could do anything. Also, the user does not have to remember another (the root account's) password. It also prompts YOU every time you try to do something with root privileges, making you think twice before you do something you really didn't want to do. Next, you can restrict which users can use sudo and which cannot (see below). And finally, each and every command run with sudo gets logged to a file (/var/log/auth.log) which you can always read to verify who did what and maybe even reverse what they did (and shouldn't have done).
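To get a feel for what those entries look like, here is a sketch that greps the command out of a sudo log line (the sample line below is made up for illustration, but it follows sudo's default log format):

```shell
# A fabricated example of a sudo entry as it appears in /var/log/auth.log
line='Nov  1 10:00:00 server sudo:     mark : TTY=pts/0 ; PWD=/home/mark ; USER=root ; COMMAND=/usr/bin/apt-get install firefox'
# Extract who did what
echo "$line" | grep -o 'COMMAND=.*'
# prints: COMMAND=/usr/bin/apt-get install firefox
```

On a real system you would run the grep against /var/log/auth.log itself (which requires sudo, fittingly enough).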
To allow a user to use the sudo command, open the Users and Groups tool from the System -> Administration menu.
You will notice that the form is mostly disabled because, guess what, changing User and Group properties requires root privileges. Just as in a terminal window, you will need to perform the equivalent of sudo on the User Settings window. You do this by clicking on the Unlock button and providing your user password. You will now see that all settings are enabled. Select the user you want to enable sudo for from the list, then click on Properties. Choose the User Privileges tab. In the tab, find "Administer the system" and check it.
And there you have it, the magical sudo command. You'd better get used to it because you will need to use it all the time.
Today, Ubuntu released its latest version: Ubuntu 8.10 (Intrepid Ibex). You can download it from the Ubuntu website, which also has upgrade instructions. I will install it myself over the weekend and report back on how it went.