A co-worker and I were bored last week and wondered about the possibility of taking a large number of old USB thumb drives of various sizes and connecting them in a RAID setup. I'm not incredibly familiar with the different RAID options, but redundancy would be valuable (so probably not RAID 0), assuming enough space was available.
Wikipedia reports USB 2.0 speeds to top out at 60 MB/s, while SATA III supports up to 6 Gbit/s, roughly 600 MB/s (I don't know how reliable this is; I'm a bit out of my home territory with this topic).
These numbers suggest that up to ten USB 2.0 thumb drives could be connected with cumulative speed gains from each one before saturating a single SATA link.
Here's where I'm guessing the issues would come up:
- Who has enough USB sticks sitting around with enough total space to make this useful? I have a handful of old sticks, but only a total of 2GB between them. Anything larger than that I'm still actively using.
- Is it possible to search for and read information from several flash drives at once, particularly if they are all connected through USB or SATA?
- What are the limitations on the use of USB and SATA? Does the idea go against some of the fundamentals of how these ports work?
- If this is possible to do, has anyone prototyped it before?
There may be some fun to be had here.
Redundant Array of Inexpensive Disks (RAID) is a storage technology used to improve the performance of a set of disks, to provide data redundancy, or both. RAID comes in various levels. This article covers RAID Levels 4, 5 and 6 and how to implement them on a Linux system.
RAID 4, 5 and 6 Overview
RAID 4, 5 and 6 are sometimes referred to as Disk Striping with Parity. Data is written to each disk one block at a time, just like in RAID 0. The difference is that in RAID 4, 5 and 6 there is also Parity.
PARITY
Parity is used for data redundancy. The redundancy allows a disk in the RAID Array to fail while the data remains accessible. Parity works at the bit level and is distributed according to the RAID Level:
- RAID 4 – Parity on a dedicated disk
- RAID 5 – Parity distributed across disks, one block per stripe
- RAID 6 – Parity distributed across disks, two blocks per stripe
Parity checks the bits in the corresponding blocks and then sets the parity bits so that the total number of 'on' bits in each position is even. For example, if three disks are used (two for data and one for parity) and we take five bits from each data disk, the parity would be:
- Disk 1: 01001
- Disk 2: 10001
- Parity: 11000
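Even parity is simply the bitwise XOR of the data blocks, so you can sanity-check the example above in a bash shell (this one-liner assumes 'bc' is installed):
echo "obase=2; $(( 2#01001 ^ 2#10001 ))" | bc
The result, 11000, is the parity block: each bit position across the three blocks now holds an even number of 'on' bits, so any one block can be reconstructed by XORing the other two.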
HARDWARE
For RAID 4 and 5, three or more disks are required, while RAID 6 requires a minimum of four. The second block of Parity in RAID 6 takes up more space, but allows the Array to survive a second disk failure; with RAID 4 or 5, if a second disk fails during a rebuild, all data is lost.
To create the RAID Array, I will use three USB drives called BLUE, ORANGE and GREEN, named for the color of each thumb drive. The drives are all SanDisk Cruzer Switches, which are USB 2.0 compliant and have a nominal capacity of 4 GB (3.7 GB usable).
NOTE: When dealing with RAID arrays, all disks should be the same size. If they are not, they must be partitioned to be the same size. The smallest drive in the array sets the usable size of all of the disks.
I placed all three USB sticks in the same hub and tested the write speed: a 100 MB file was written to each and timed, taking an average of 11.5 seconds, for an average write speed of 8.70 MB/sec. A read test gave an average read time of 3.5 seconds, for an average read speed of 28.6 MB/sec.
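The exact test commands are not shown in the article; one way to reproduce a similar test is with 'dd' (a sketch only; the mount path /media/jarret/BLUE is illustrative):
dd if=/dev/zero of=/media/jarret/BLUE/test.img bs=1M count=100 oflag=direct
dd if=/media/jarret/BLUE/test.img of=/dev/null bs=1M iflag=direct
The 'oflag=direct' and 'iflag=direct' options bypass the page cache so the reported speed reflects the device itself; dd prints the elapsed time and throughput when it finishes.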
To set up the RAID Array, you use the command 'mdadm'. If it is not installed on your system, entering 'mdadm' in a terminal will simply produce a 'command not found' error.
To install it, use Synaptic or the equivalent package manager for your Linux distro.
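On a Debian-based system, for example, the package can be installed from a terminal:
sudo apt-get install mdadm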
Once installed, you are ready to make a RAID 4, 5 or 6 Array.
Creating the RAID Array
Open a terminal and type 'lsblk' to get a list of your available drives. Make a note of the drives you are using so you do not mistype a device name and add the wrong drive to the Array.
NOTE: Entering the wrong drive can cause a loss of data.
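To narrow the 'lsblk' output to the columns that matter here, you can select them explicitly:
lsblk -o NAME,SIZE,TYPE,MOUNTPOINT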
From the listing of the command from above, I am using sdb1, sdd1 and sde1. The command is as follows:
sudo mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdd1 /dev/sde1 --verbose
The command creates (--create) a RAID Array called md0. The RAID Level is 4 and three devices are being used to create the RAID Array – sdb1, sdd1 and sde1.
NOTE: Simply change the 'level=' to either 4, 5 or 6 for the RAID Level you want to create.
The following should occur:
jarret@Symple-PC ~ $ sudo mdadm --create /dev/md0 --level=4 --raid-devices=3 /dev/sdb1 /dev/sdd1 /dev/sde1 --verbose
mdadm: chunk size defaults to 512K
mdadm: /dev/sdb1 appears to be part of a raid array: level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: partition table exists on /dev/sdb1 but will be lost or meaningless after creating array
mdadm: /dev/sdd1 appears to be part of a raid array: level=raid0 devices=0 ctime=Wed Dec 31 19:00:00 1969
mdadm: partition table exists on /dev/sdd1 but will be lost or meaningless after creating array
mdadm: /dev/sde1 appears to contain an ext2fs file system size=3909632K mtime=Wed Nov 2 17:27:52 2016
mdadm: size set to 3907072K
Continue creating array?
NOTE: If you get an error that a device is busy, remove 'dmraid'. On a Debian-based system use the command 'sudo apt-get remove dmraid' and, when it completes, reboot the system. After the system restarts, try the 'mdadm' command again. You may also have to use 'umount' to unmount the drives first if they were automounted.
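For example, the three sticks used here can be unmounted in one go (the device names match the ones used in this article):
sudo umount /dev/sdb1 /dev/sdd1 /dev/sde1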
Answer 'y' to the question to 'Continue creating array?' and the following should appear:
mdadm: Defaulting to version 1.2 metadata
mdadm: array /dev/md0 started.
The RAID Array is created and running, but not yet ready for use.
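Before moving on, you can confirm that the Array is assembled and watch its initial parity sync with:
cat /proc/mdstat
sudo mdadm --detail /dev/md0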
Prepare md0 for use
You may look around, but the device md0 is nowhere to be found in your file manager. Open the GParted application and you will see it there, ready to be prepared for use.
By selecting /dev/md0 you will get an error that no Partition Table exists on the RAID Array. Select Device from the top menu, then 'Create Partition Table…'. Specify your partition table type and click APPLY.
Now create the Partition and select the file system to be used; either EXT3 or EXT4 is suggested for formatting the Array. You may also want to set the RAID flag. Apply the partition scheme. I gave it a Label of 'RAID 4' and then clicked APPLY to make all the selected changes. The drives should be formatted as selected, and the RAID Array is ready to be mounted for use, as shown in the command-line sketch below.
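If you prefer the terminal to GParted, a rough command-line equivalent looks like this (a sketch only; the partition boundaries and label are illustrative):
sudo parted --script /dev/md0 mklabel gpt mkpart primary ext4 0% 100%
sudo mkfs.ext4 -L "RAID 4" /dev/md0p1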
Mount RAID Array
Before closing GParted, look at the Partition name as shown in Figure 1. My Partition name is '/dev/md0p1'. The partition name is important for mounting.
FIGURE 01
You may be able to simply mount the volume labeled 'RAID 4' from your file manager, as I was able to do.
If the mount does not work, try the following: go to your '/media' folder and, as ROOT, create a folder such as RAID to be used as a mount point. In a terminal, use the command 'sudo mount /dev/md0p1 /media/RAID' to mount the RAID Array at the mount point named RAID.
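Put together, the two steps look like this:
sudo mkdir /media/RAID
sudo mount /dev/md0p1 /media/RAID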
Now you must take ownership of the RAID Array with the command:
sudo chown -R jarret:jarret /media/RAID
The command uses my username (jarret) and group name (jarret) to take ownership of the mounted RAID Array. Use your own username and mount point.
Now, when I write to the RAID Array, the time to write a 100 MB file averages 11.33 seconds, a write speed of 8.83 MB/sec. Reading a 100 MB file from the RAID Array takes an average of 4 seconds, a read speed of 25 MB/sec.
As you can see, the speed has barely changed (write: 8.70 MB/s to 8.83 MB/s; read: 28.6 MB/s down to 25 MB/s); the benefit here is redundancy, not raw speed. Do remember that if one drive of the Array is removed or fails, the redundancy of the data is lost, but the data is still available.
NOTE: The speed may be increased by placing each drive on a separate USB ROOT HUB. To see the number of ROOT HUBs you have and where each device is located, use the command 'lsusb'.
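The tree view of 'lsusb' makes the bus layout easier to read, showing which devices share a ROOT HUB:
lsusb -t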
Auto Mount the RAID Array
To have the RAID Array auto mount after each reboot is a simple task. Run the command 'blkid' to get the needed information about the RAID Array. For example, running it after mounting my RAID Array, I get the following:
/dev/sda2: UUID='73d91c92-9a38-4bc6-a913-048971d2cedd' TYPE='ext4'
/dev/sda3: UUID='9a621be5-750b-4ccd-a5c7-c0f38e60fed6' TYPE='ext4'
/dev/sda4: UUID='78f175aa-e777-4d22-b7b0-430272423c4c' TYPE='ext4'
/dev/sda5: UUID='d5991d2f-225a-4790-bbb9-b9a48e691061' TYPE='swap'
/dev/sdb1: LABEL='GREEN' UUID='5914-5431' TYPE='vfat'
/dev/sdd1: LABEL='ORANGE' UUID='4C76-7987' TYPE='vfat'
/dev/sdc1: LABEL='My Book' UUID='54D8D96AD8D94ABE' TYPE='ntfs'
/dev/sde1: UUID='fb783956-17f6-6eda-a45b-150a56e5af70' UUID_SUB='34f799ec-979e-93ec-b8cd-d3f3b7fb5d28' LABEL='Symple-PC:0' TYPE='linux_raid_member'
/dev/md0p1: LABEL='RAID 4' UUID='a07e8b6a-670a-4465-b3a4-39387f19d21e' TYPE='ext4'
The needed information is the line with the partition '/dev/md0p1'. The Label is RAID 4 and the UUID is 'a07e8b6a-670a-4465-b3a4-39387f19d21e' and the type is EXT4.
Edit the file '/etc/fstab' as ROOT using an editor you prefer and add a line similar to 'UUID=a07e8b6a-670a-4465-b3a4-39387f19d21e /media/RAID ext4 defaults 0 0'. Here the UUID comes from the blkid command, '/media/RAID' is the mount point, and ext4 is the drive format. Use the word 'defaults' and then '0 0'. Be sure to use a TAB between each field.
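With a TAB between fields, the finished line looks like this:
UUID=a07e8b6a-670a-4465-b3a4-39387f19d21e	/media/RAID	ext4	defaults	0	0
One caveat: on some distros the array itself must also be recorded so it is assembled under the same name at boot; on a Debian-based system this would be 'sudo mdadm --detail --scan | sudo tee -a /etc/mdadm/mdadm.conf' followed by 'sudo update-initramfs -u'.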
Your RAID 4 drive Array should now be completely operational for use.
NOTE: Looking at the line for /dev/sde1, just before /dev/md0p1, you can see a UUID_SUB and the TYPE 'linux_raid_member'. This is how you can identify the original devices being used in the RAID Array.
Removing the RAID Array
To stop the RAID Array, you need to unmount the RAID mount point and then stop the array device 'md0' as follows:
sudo umount -l /media/RAID
sudo mdadm --stop /dev/md0
Once done, you need to reformat the drives and also remove the line from /etc/fstab which enabled it to be automounted.
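'Reformatting' here means clearing the RAID metadata from each member drive; mdadm has a command for exactly that:
sudo mdadm --zero-superblock /dev/sdb1 /dev/sdd1 /dev/sde1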
Fixing a broken RAID Array
If one of the drives should fail, you can easily replace the drive with a new one and restore the data to it.
Now, let's say from the above that drive sde1 fails. If I enter the 'lsblk' command, the drives sdb1 and sdd1 are still shown as part of 'md0p1'. The RAID volume is still accessible and usable, but Fault Tolerance is gone since only the two drives remain.
To determine the faulty drive, use the command: 'cat /proc/mdstat'.
The line which shows '[UU_]', with the underscore in place of a 'U', indicates a break in the RAID Array. The device marked '(F)' is the one that failed, so you know which drive to remove and replace.
To fix a broken RAID Array, replace the failed drive with a new drive at least as large as the one it replaces. After adding the new drive, run 'lsblk' to find its device name; say, for example, it is 'sdf1'. If the new drive was automounted, first unmount it using its label with the command 'umount /media/jarret/label'.
To join the new drive to the existing broken RAID 4 Array, the command is:
sudo mdadm --manage /dev/md0 --add /dev/sdf1
The target is the Array device 'md0' (its partition 'md0p1', shown previously in GParted, is what gets mounted). The device to add is 'sdf1'.
To see the progress of the rebuild, use the command 'cat /proc/mdstat'.
At any time, the command 'cat /proc/mdstat' can be used to see the state of any existing RAID Array.
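To monitor a rebuild without retyping the command, 'watch' re-runs it for you at a set interval:
watch -n 5 cat /proc/mdstat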
If you must remove a drive, first tell the system that the device has failed. For instance, if I wanted to remove drive sde1 because it was making strange noises and I was afraid it would fail soon, the command would be:
sudo mdadm --manage /dev/md0 --fail /dev/sde1
The command 'cat /proc/mdstat' should now show the device as failed. Before you just unplug the device, you need to tell the system to remove it from the Array. The command would be:
sudo mdadm --manage /dev/md0 --remove /dev/sde1
You can now remove the drive, add a new one and rebuild the Array as described above.
I hope this helps you understand RAID 4, 5 and 6 Arrays. Enjoy your RAID Array!