One of my readers had a mysterious problem: the WordPress Editor was not showing up under Appearance or Plugins. It’s a handy tool for quick edits to any plugin or theme file, and I’ve relied on it more times than I can count.
Having it enabled is a double-edged sword, of course, because with great power comes great responsibility: accidentally remove a semicolon from the end of a line in a plugin file, and your WordPress site will go down – and even the best minds will have a hard time tracking the problem down.
There is a way to remove the editor functionality completely from WordPress to save tinkerers from themselves: add the following line to the wp-config.php file:
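The line in question is the DISALLOW_FILE_EDIT constant – add it to wp-config.php, above the "That's all, stop editing!" comment:

```php
<?php
// Disable the built-in theme and plugin editor in the WordPress admin.
// In a real wp-config.php this line goes above "That's all, stop editing!".
define('DISALLOW_FILE_EDIT', true);
```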
This will remove the Editor from both Appearance and Plugins. The change will be in effect as soon as you save the file and refresh the admin interface.
To bring the editor back, simply remove the entire line from wp-config.php, or change the value from true to false.
Hidden files start with a . on UNIX-like systems, and OS X is one of them. While we can show hidden files in a Terminal session using something like ls -a, it’s not so easy to convince the Finder to show such files.
If ever you need to see them, execute the following from the command line:
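The setting lives in Finder’s preferences and can be changed with the defaults command:

```shell
# tell Finder to show hidden files (takes effect once Finder relaunches)
defaults write com.apple.finder AppleShowAllFiles TRUE
```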
Now relaunch Finder by ALT-right-clicking the Finder icon in the Dock and choosing Relaunch.
Next time you open a Finder window – either on its own or via an app – you’ll see all kinds of files you didn’t even know existed. Most begin with a dot and appear slightly lighter in colour; others are normally hidden system folders, such as Library.
So many new files can make your file navigation a little cluttered – which is why it’s good to know how to switch this feature off again. Same command as above, but this time we’ll say no:
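To hide them again, relaunching Finder afterwards as before:

```shell
# hide hidden files in the Finder once more
defaults write com.apple.finder AppleShowAllFiles FALSE
```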
While I dislike change for the sake of change, I believe it makes a lot of sense in this case. I have been working with Parallels products since 2008, and from the start I felt there was a disconnect between the consumer products, such as Parallels Access and Parallels Desktop, and the professional products, such as Plesk.
The Odin branding will be used for the latter line of products, while the Parallels branding will continue to be used for Parallels Desktop & Co. Parallels Plesk will simply be known as “Plesk”.
The company itself will remain a single unit for now, simply operating under two brands.
In case you’re wondering what will become of all those Parallels Summits, they will be renamed to Odin Summits. The first one with this branding will be in May: http://www.odin.com/summit/2015/
I passionately *H*A*T*E* the startup chime that my Mac makes when I switch it on. At least on my MacBook, if the volume is turned down before I shut down, the system restarts silently. I guess it’s somehow linked to the internal speakers.
Sadly this approach doesn’t work on my Mac Mini: due to the lack of “real” internal speakers, the Mini always wakes up with that horrible eighties K-DONNNNNNNNG noise, waking my wife and large parts of the neighbourhood.
But there’s good news: thanks to the nvram command we can set a firmware value to suppress this sound. Here’s how:
sudo nvram SystemAudioVolume=%80
This writes a value of 128 (0x80 in hex) to the firmware’s NVRAM. Make sure to shut down your system and then power it back on to “hear” the effect on a Mac Mini: simply restarting will not suppress the sound, but a full shutdown and restart will do the trick from now on. Result!
As much as I dislike the sound, it is there for a reason: it signals the successful completion of a quick self test. I appreciate this – so I may not want to switch K-DONNNNNNNNG off forever.
It’s easy to remove that value from NVRAM again, using the -d parameter of the same command:
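Like so:

```shell
# delete the firmware variable, re-enabling the startup chime
sudo nvram -d SystemAudioVolume
```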
There. Now the horror chime is enabled again, ready to annoy more neighbours at 3am.
There’s a built-in command line tool in every Mac called caffeinate that prevents your computer from going to sleep, even when the lid is closed. This is the default behaviour if an external monitor is attached, but if that’s not the case, MacBooks just go to sleep as soon as you close the lid.
While several GUI tools are available (such as InsomniaX or the NoSleep extension), you can also call caffeinate from the command line without installing anything.
Open the Terminal app (under Applications – Utilities), and simply type
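The command on its own is all you need:

```shell
# keep the Mac awake indefinitely, until interrupted with CTRL+C
caffeinate
```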
The cursor will disappear and your Mac won’t go to sleep. To terminate the behaviour, simply press CTRL+C – just like you would to stop any other shell command.
You can stop the command and close the Terminal session as soon as your lid is closed (and stays closed). If you open and close your lid again, your Mac will get sleepy again.
The command has a lot more to offer: for example, you can prevent the hard disks from sleeping with caffeinate -m, or prevent the display from going blank with caffeinate -d.
You can also specify a timeout using
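For example, to keep the Mac awake for an hour:

```shell
# stay awake for 3600 seconds (one hour), then allow sleep again
caffeinate -t 3600
```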
This specifies the time in seconds you would like caffeinate to stay active (after 3600 seconds, or one hour, your Mac will sleep again).
Check out man caffeinate from the command line for more options.
It’s never good if your server is working fine, but the domains that resolve to it are down for one reason or another. This has happened to me TWICE this year already, and both times it was out of my hands (yes @ENOM, I’m looking at you).
Many of my clients use websites for data storage, and while it’s not nice when one goes down, it’s even worse if you can’t access information you may have saved as part of a web application. Thankfully there is a way to access Plesk websites even if the domain no longer resolves properly.
Let me show you how in this article.
1.) Accessing Plesk without a domain
First let’s gain access to our Plesk server via its numeric IP instead of a domain name. Let’s assume you’ve had access via https://domain.com:8443 before, but domain.com is currently down due to a DNS resolution issue.
In that case, find out your server’s numeric IP and access Plesk at https://184.108.40.206:8443 – replacing 184.108.40.206 with the IP of your own server. If you can’t remember it, log in to your domain host’s control panel and look it up.
2.) Preparing an external domain that’s still working
We need a domain that still works and is not affected by the DNS outage. It doesn’t have to point to the Plesk server whose domains you want to access, but you need access to the DNS records. Perhaps your domain resolves via CloudFlare or DNSMadeEasy, or even your domain registrar’s control panel.
Let’s call this domain working.com. We must create an A record that looks like this:
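In BIND-style zone file notation the record might look like this – a sketch assuming your Plesk server’s IP is 184.108.40.206 (use your own, and note the dashes in the host name versus the dots in the IP value):

```text
; wildcard A record on working.com for the Plesk preview
*.184-108-40-206.working.com.   60   IN   A   184.108.40.206
```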
Substitute the IP of your own Plesk server here, replacing the dots in the numeric IP with dashes. Don’t forget the asterisk at the front so all requests can be redirected properly.
You will also have to supply the IP of your Plesk server in the A record. Don’t worry, this change will not impact the other services hosted on this domain – we’re simply making an addition.
One last note: you want this to kick in as soon as possible, so set your TTL to something like 60 rather than 4000. TTL describes the “time to live” in seconds – and we want this emergency preview in place sooner rather than later.
3.) Setting up Plesk with an external Preview Domain
In Plesk, head over to Tools and Settings – General Settings – Website Preview Settings. If you can’t see Tools and Settings, look for the Server Tab.
Define an external preview domain here, like this:
This is designed for customers who want to see their websites before a domain has switched to this server. We’re borrowing this functionality in these troubled times.
Plesk lets you choose a domain from the drop down menu, but assuming none of them are working at this point, our tweaked external domain should work just fine.
Now head over to the customer control panel for a domain that is not resolving on this server. In our example it’s domain.com. Under Websites and Domains, find the Preview option:
Clicking this will open a new browser tab which will attempt to display your website on a URL much like this one:
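Assuming the preview domain setup from above, the URL follows Plesk’s preview pattern – site domain first, then the server IP with dashes, then the external domain:

```text
http://domain.com.184-108-40-206.working.com/
```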
If all works well you should see your website, all the while bypassing the broken domain, with full PHP scripting capabilities. You can also access subfolders by simply appending them to the URL like this:
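For example:

```text
http://domain.com.184-108-40-206.working.com/subfolder/
```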
Caveat: Subdomains, Permalinks and Redirects
This isn’t a perfect solution, and several things won’t work with this approach. Webmail for one thing, or anything that is accessed as a subdomain (like webmail.domain.com).
Another thing that won’t work is permalinks: all Apache mod_rewrite rules will attempt to turn the URL back into its original state, and this means requests may be redirected to the broken domain.
In addition, web applications like WordPress are usually aware of where they live and you may have to teach them their new (temporary) home URL.
Here’s how you can fix a WordPress site. It will allow you to write new and access existing posts until the DNS problem has been fixed.
Try logging in using /wp-login.php instead of /wp-admin. Then head over to Settings – General and change the two values for WordPress Address and Site Address to the temporary Plesk Preview URL (see above).
Before you do, make a note of what these values were before you hit Save. You’ll have to change them back when your real domain resolves again:
Next, head over to Settings – Permalinks and simply click the save button. This will update the .htaccess files so that all mod_rewrites can be redirected to the correct temporary URL.
As soon as the DNS panic is over, change these two URL values back to their original and once again click save under Permalinks.
It’s easy to establish an FTP connection using the ftp command from the Linux Command Line. Sadly this command does not accept login credentials as parameters – which means that if we use it in a script, our script will pause and wait for us to type those credentials in manually. Not really suitable for automated backups.
Thanks to a clever mechanism called netrc we can create a file in the home directory of the user who runs the script and provide credentials there. Let me show you how this works.
First we create a file called .netrc. It’s a hidden file and it needs to reside in the home directory of the user who will connect via FTP. I’m going to use root for this:
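Here’s what the file might look like – ftp.domain.com, myuser and mypassword are placeholders for your own server and credentials:

```text
# machine <hostname> login <username> password <password>
machine ftp.domain.com login myuser password mypassword
```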
The first line is just a comment so you can remember how to add parameters here. The second line is an example of a host you want to connect to. Add as many other servers as you like, all following the same pattern.
Be aware that you need to connect to the server exactly as it is specified in the .netrc file. In the above example, if you were to connect to domain.com instead, you would be asked for credentials, as netrc cannot find a match.
The .netrc file needs to be readable only by this one user, otherwise connections may fail. We do this by changing the file permissions to 600:
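Since I created the file as root, that’s:

```shell
# make .netrc readable and writable by its owner only
chmod 600 /root/.netrc
```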
That should do it! Try to connect with
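Using the host from our example file:

```shell
# ftp picks up the matching credentials from ~/.netrc automatically
ftp ftp.domain.com
```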
and the connection will be established without the prompt for credentials.
If netrc isn’t working for you, or you choose not to use it, note that you can also provide FTP credentials with a here script. I find that approach a bit clunky, but the following link has details on how to do that:
It’s not quite as easy to get up and running with Jekyll as the Quick Start Guide makes it sound. But it’s not super difficult either – if explained from one human to another.
Here’s how I got Jekyll working on a vanilla CentOS 7 instance.
Installing some necessary packages
Before we can install Jekyll using Rubygems, we need a few packages which aren’t with us by default. One comes from the Fedora EPEL repository, so let’s enable that first:
yum install epel-release
I love how easy this has become since CentOS 7! Next, some packages. We’ll need Ruby and the developer extensions. We also need a web server, so I’ll choose Apache – but I understand that others work just as well.
What this boils down to are the following packages:
yum install ruby ruby-devel nodejs httpd
Great! Now that Ruby is working, let’s install Jekyll via Rubygems:
gem install jekyll
This could take a moment, be patient. When it’s finished, let’s put Jekyll to work.
Creating and previewing a test site
Let’s call our new test site “test”. I’m assuming you’re in your home directory, from which web files are usually never served – but thanks to Jekyll we can make that happen for local testing purposes. The following command will create a brand new site in the current directory:
jekyll new test
New jekyll site installed in /root/test.
Excellent! But how do we get to see it? Well Jekyll has created a directory called “test” for us. Let’s enter it and preview the site:
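That’s two commands:

```shell
# change into the new site and start Jekyll's preview server on port 4000
cd test
jekyll serve
```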
Server running... press ctrl-c to stop.
What Jekyll is trying to say is that IF we are building this on a system with a desktop environment, we could open a web browser and enter http://localhost:4000 now to preview the site. Jekyll spawns a web server on port 4000 so that we can see any changes we’ve made since last time without affecting a live site.
Sadly this approach doesn’t work on remote servers, at least not for me: port 4000 on a remote server simply did not respond. On the local server, however, it was working just fine, and we should see a picture like this:
That’s a good start – let’s put the site live so that our web server can show it to the world!
Building the site
Press CTRL-C to stop the preview server and ask Jekyll to create this site in our default web directory. In CentOS that’s /var/www/html. Mine is empty, so I’ll create the site there using
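The --destination switch tells Jekyll where to write the generated site:

```shell
# build the site straight into Apache's document root
jekyll build --destination /var/www/html
```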
Auto-regeneration: disabled. Use --watch to enable.
Nicely done! Now let’s surf to http://localhost (without the 4000 at the end), and the local machine should see this content served up fresh just like any other website. Remote computers should use either the IP address or a domain resolving to it (say http://18.104.22.168).
Now that you know how it works, enjoy using Jekyll!
I was racking my brains over how to mount an SD card formatted with anything other than FAT32 on my Android device. Jelly Bean and Kit Kat automatically mount FAT32 partitions, but they seem to ignore native Linux file systems – which Android clearly understands.
Apparently there’s a $1.54 app on the Play Store that can auto-mount cards (called EzyMount), but there is a way to do this for free: the old-fashioned manual way.
All we need is root access to the device and a Terminal Emulator (available from the Play Store).
In this example I’m using /sdcard as my mount point, but you can of course mount that card anywhere you like. On my Nook Tablet, the standard mount point for the SD card is /storage/sdcard1, but this differs from device to device of course.
To mount the card, we’ll use the mount command:
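On my Nook Tablet, with an ext4 card, that’s:

```shell
# mount the first partition of the SD card (ext4) at /sdcard - run as root
mount -t ext4 /dev/block/mmcblk1p1 /sdcard
```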
We need to specify the partition format with -t (for example, ext4, ext3, ext2), followed by the partition we want to mount, followed by the mount point. On my Nook Tablet for example, the SD card device itself is /dev/block/mmcblk1, and the first partition in it is mmcblk1p1.
To figure out what a device is called, you can use
Run this command before and after inserting your storage device. Watch what changes: the added device will be your SD card or USB stick.
Why do you need EXT4? Why not just use FAT32?
I need my files to be larger than the 4GB maximum file size imposed by FAT32. I’m using Anton Skshidlevsky’s amazing Linux Deploy to install fully fledged versions of various Linux distributions on my device. These don’t replace Android, but instead run side by side with Android in a chrooted environment.
Linux Deploy creates ISO images on the SD card which represent a full Linux installation (for example Debian, Fedora, Ubuntu – anything that runs on your architecture). To make use of the full size of the SD card, and to let the self-contained file system hold more data, those ISO files can grow as large as the card itself: Linux Deploy uses self-expanding images. However, you can only define image sizes larger than 4GB if the card uses an EXT-type file system.