I’ve come across an odd problem today on a server that’s been working fine for all kinds of FTP traffic for many years. Turns out that today, FileZilla started complaining about explicit TLS connections (when available) and gave the following error message:
425 MLSD unable to build data connection: operation not permitted
Clients could still connect, but no directory content was displayed, nor was uploading new files possible. Rats, I thought. This was on a CentOS 6 server with Plesk 12 running without a hitch otherwise.
Turns out that by default, ProFTP is configured to re-use TLS sessions – but this behaviour trips up FileZilla, which throws an error instead. Plain (non-secure) sessions were not affected.
Thankfully, Adam Stohl knows the answer to this problem: tell ProFTP not to re-use TLS sessions. Open /etc/proftpd.conf and add the following line to the bottom of the file:
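Assuming ProFTP's standard mod_tls module, the directive in question is:

```
TLSOptions NoSessionReuseRequired
```

This tells mod_tls not to require clients to re-use the TLS session from the control connection on the data connection, which is the behaviour FileZilla objects to.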
The ProFTP service in Plesk is part of xinetd, so for those changes to take effect, simply restart it with this:
service xinetd restart
And voila, TLS connections can happen again. Thanks, Adam – you’re a life saver!
Plesk uses ProFTP as the default FTP server. It has a handy feature that allows file uploads to resume or append should a connection be broken during transmission. This means that partially transferred data doesn't have to be uploaded again; it can simply be added to, potentially saving a lot of time.
Although easy to activate, this feature is not enabled by default on Plesk installations for security reasons. Here’s how to make it happen:
Edit /etc/proftpd.conf and add the following few lines:
# allow resuming file uploads
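A sketch of the relevant ProFTPD directives (the standard overwrite/restart settings; adjust to taste):

```
AllowOverwrite on
AllowRetrieveRestart on
AllowStoreRestart on
```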
You may find the AllowOverwrite directive in there already, in which case replace it with the above block. For the changes to take effect, restart the xinetd service (of which ProFTP is part):
service xinetd restart
Works on both CentOS 6 and CentOS 7.
Note that for this to work, it also needs to be enabled in your FTP client. In FileZilla it’s under Settings – Transfers – File Exists Action:
It’s easy to establish an FTP connection using the ftp command from the Linux Command Line. Sadly this command does not accept login credentials as parameters – which means that if we use it in a script, our script will pause and wait for us to type those credentials in manually. Not really suitable for automated backups.
Thanks to a clever mechanism called netrc we can create a file in the home directory of the user who runs the script and provide credentials there. Let me show you how this works.
First we create a file called .netrc. It’s a hidden file and it needs to reside in the home directory of the user who will connect via FTP. I’m going to use root for this:
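A minimal .netrc might look like this – the hostname and credentials are placeholders, so substitute your own:

```
# machine <hostname> login <username> password <password>
machine ftp.domain.com login myuser password mypassword
```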
The first line is just a comment so you can remember how to add parameters here. The second line is an example of a host you want to connect to. Add as many other servers as you like, all following the same pattern.
Be aware that you need to connect to the server exactly as it is specified in the .netrc file. In the above example, if you connected to domain.com instead, you would be asked for credentials because netrc cannot find a match.
The .netrc file needs to be readable only by this one user, otherwise connections may fail. We do this by changing the file permissions to 600:
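Assuming the file lives in root's home directory as above:

```shell
chmod 600 /root/.netrc
```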
That should do it! Try to connect with
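Using the placeholder host from the example above:

```
ftp ftp.domain.com
```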
and the connection will be established without the prompt for credentials.
If netrc isn’t working for you, or you choose not to use it, note that you can also provide FTP credentials with a here script. I find that approach a bit clunky, but the following link has details on how to do that:
I fixed a problem this morning that wouldn't let the latest version of FileZilla (v18.104.22.168) connect to one of my client's servers anymore.
This had not been a problem in the past.
The connection itself worked, but FileZilla failed due to a problem with the TLS Certificate. Here’s the error:
Error: Received TLS alert from the server: Handshake failed (40)
Error: Could not connect to server
Turns out that FileZilla has made a few changes and deprecated the insecure RC4 algorithm in FTP over TLS. Since ProFTP didn't know the path to the server certificates, the TLS handshake failed and no connection was possible.
# Authenticate clients that want to use FTP over TLS?
# Allow SSL/TLS renegotiations when the client requests them, but
# do not force the renegotations. Some clients do not support
# SSL/TLS renegotiations; when mod_tls forces a renegotiation, these
# clients will close the data connection, or there will be a timeout
# on an idle data connection.
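A sketch of the matching mod_tls directives, assuming Plesk's default certificate location (substitute your own paths if they differ):

```
<IfModule mod_tls.c>
    TLSEngine on
    # Authenticate clients that want to use FTP over TLS?
    TLSRequired off
    # Server Certificate
    TLSRSACertificateFile /usr/local/psa/admin/conf/httpsd.pem
    TLSRSACertificateKeyFile /usr/local/psa/admin/conf/httpsd.pem
    # Allow, but do not force, SSL/TLS renegotiations
    TLSRenegotiate required off
</IfModule>
```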
In this example the Server Certificate section contains the default path to Plesk’s certificates, but feel free to substitute them if yours are stored elsewhere.
There’s no need to restart xinetd because ProFTP creates a new process for every new connection, which will then include the new configuration. Now FileZilla can connect without a hitch, only displaying the new Server Certificate the first time it is encountered:
Note that this issue no longer occurs with newer installations of Plesk. This particular instance of Plesk has seen many updates since version 10.4, hence the tweak was necessary.
Passive FTP ports are not open by default when you install Plesk. To make it happen we need to patch the ProFTP configuration with a range of ports (anything between 49152 and 65534) and open the same range in our firewall.
You’ll find the ProFTP config file in /etc/proftpd.conf. There’s no need to open the whole available range; I’ll settle for 99 possible ports here. Add the following somewhere at the top of the file, outside any global declarations:
# adding passive ports and public IP address
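A sketch of the two directives ProFTPD needs – 49152 to 49250 covers 99 ports, and the IP address is a placeholder for your server's public IP:

```
PassivePorts 49152 49250
MasqueradeAddress 203.0.113.10
```

MasqueradeAddress is only needed if your server sits behind NAT and would otherwise announce its internal IP to clients.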
For the changes to become effective we’ll need to restart the xinetd service which ProFTP is part of in Plesk:
service xinetd restart
This will allow passive connections – but you also need to open those in your firewall. The easiest way to do this is via the Firewall Extension in Plesk:
Select Modify Firewall Rules, then Add Custom Rule. Give it a title, then add your port range and click OK. Your changes are not effective yet because Plesk needs to restart the firewall service. To do this hit “Apply Changes”, followed by “Activate”. Wait a moment and Plesk will have taken care of it.
If you don’t want to use the extension, here’s how you can open those ports manually. On CentOS 6 you can manually add that port range on the command line like this:
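A sketch using iptables, assuming the 99-port range from above (run as root):

```
iptables -I INPUT -p tcp --dport 49152:49250 -j ACCEPT
service iptables save
```

The second command persists the rule across reboots on CentOS 6.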
You can use the ftp command to talk to an FTP server from the Linux Command Line. Type ftp to see if the tool is installed. If you get a “command not found” message then go ahead and type yum install ftp to make it available on your system.
Using it is very straightforward – but I keep forgetting how because I only do it once in a blue moon. So here’s a handy cheat sheet:
Logging in to your FTP Server
Assuming our site is example.com, simply type this:
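With the example.com placeholder:

```
ftp example.com
```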
This will connect you, but the system wants to know the username and password at the prompt. Provide those and if your login was successful you’ll see something like this:
230 User tester logged in
Remote system type is UNIX.
Using binary mode to transfer files.
Note that you’re now at the FTP command line and no longer on the Linux command line (you can tell by the ftp> in front of the cursor). Therefore only FTP commands are now accepted, until you type “exit” or “bye” to go back to Linux.
To see a list of available commands type help and you’ll see a list much like this:
Commands may be abbreviated. Commands are:
!        debug       mdir      sendport   site
$        dir         mget      put        size
account  disconnect  mkdir     pwd        status
append   exit        mls       quit       struct
ascii    form        mode      quote      system
bell     get         modtime   recv       sunique
binary   glob        mput      reget      tenex
bye      hash        newer     rstatus    tick
case     help        nmap      rhelp      trace
cd       idle        nlist     rename     type
cdup     image       ntrans    reset      user
chmod    lcd         open      restart    umask
close    ls          prompt    rmdir      verbose
cr       macdef      passive   runique    ?
delete   mdelete     proxy     send
No need to panic: The good news is that we don’t really use a plethora of new commands, and some (like ls and mkdir) work the same way; just the output may look a bit different.
Let’s go through a few common scenarios now: listing and creating directories, uploading, downloading, and deleting files. Classic CRUD – FTP Style.
If you ever need to come out of a running command, CTRL-C will do the trick (CTRL-D will close the FTP session altogether).
Listing and Switching Directories
Your usual Linux favourites will work fine to list and switch directories:
– ls (list directory, same as dir)
– cd (change into directory, for example “cd mydir”)
– cd .. (move one directory up in the tree)
Excellent: nothing new to learn here. Result!
Creating and Deleting Directories
Another nice thing is that mkdir is still working to create a directory. Here’s how we create a directory called test:
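At the ftp> prompt:

```
ftp> mkdir test
```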
Likewise, rmdir does a good job at deleting (empty) directories:
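Again at the ftp> prompt:

```
ftp> rmdir test
```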
To delete a directory that contains files you must first remove all files (see below under Deleting Files) and then use this command.
To download a single file we can use the get command (or recv if you can remember it better). You must type out the entire file name for this to work, and you won’t get a progress report while your file downloads:
ftp> get testfile.tar
local: testfile.tar remote: testfile.tar
150 Opening BINARY mode data connection for testfile.tar (86365356 bytes)
86365356 bytes received in 11 secs (7865.17 Kbytes/sec)
This will save testfile.tar in the Linux directory that you were in before you initiated the FTP session.
To save files in a directory other than the current one, use the lcd command to change the local directory from within the FTP session (for example, lcd /tmp), then download as usual.
Sadly wildcards are not working with get, so you’ll always have to type out the exact file name. Lucky for us you CAN use wildcards to download multiple files with mget, like this:
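For example, using the test files from earlier:

```
ftp> mget test*
```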
Now all files starting with “test” are downloaded and you’ll be prompted one by one. This will work for single files too and saves you having to type out cryptic long names. Human 1 – FTP 0. Ha!
put and mput work just like get, but they upload local files to the current FTP directory. You can specify a local Linux path when doing this, but put and mput expect a local path to also exist on the FTP remote (and fail if they don’t). Read: messy. There probably is a way to deal with this, but life’s just too short.
Just like get, put also needs the whole file name and cannot deal with wildcards – but mput does:
ftp> mput test*
150 Opening BINARY mode data connection for testfile.tar
236716 bytes sent in 0.0141 secs (16825.36 Kbytes/sec)
There are also delete and mdelete commands which – you guessed it – remove unwanted files from the server. Same as before: no wildcards with delete, but they work fine with mdelete:
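Sticking with the test files from above:

```
ftp> delete testfile.tar
ftp> mdelete test*
```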
FTP transfers all files and passwords “in the clear” and does not support encryption. Check out the sftp command, which will do all of this and more while encrypting all transfers.
Note that there is a difference between SFTP and FTPS: the latter (FTPS) is the same as FTP but with encryption added to it. SFTP isn’t really FTP at all, it’s an SSH connection that works much like rsync and scp, and uses similar syntax.
Passive FTP connections should work out of the box in Plesk, provided no other firewall or NAT is interfering.
I’ve recently noticed that when I install Plesk on Amazon EC2 every passive FTP connection fails with an error such as “Server sent passive reply with unroutable address. Passive mode failed.”
The reason for this mishap is twofold:
EC2 instances are behind a NAT, and therefore have an internal (unroutable) IP and an external (public) IP. When a passive connection request comes in, ProFTP – Plesk’s default FTP Server – tells the connecting client its internal private IP address, which the client then, quite rightly, fails to connect to.
On top of that, we need to make sure to open a range of ports we want to use for passive FTP connections and tell ProFTP only to use those.