Rapid application development (RAD) is the merger of various structured techniques (especially data-driven information engineering) with prototyping techniques and joint application development techniques to accelerate systems development.
RAD calls for the interactive use of structured techniques and prototyping to define the users' requirements and design the final system. Using structured techniques, the developer first builds preliminary data and process models of the business requirements. Prototypes then help the analyst and users verify those requirements and formally refine the data and process models. This cycle of models, then prototypes, then models, then prototypes, and so forth, ultimately results in a combined business requirements and technical design statement to be used for constructing the new system.
RAID, an acronym for Redundant Array of Independent Disks (originally Redundant Array of Inexpensive Disks), is a technology that provides high levels of storage reliability from low-cost, less reliable PC-class disk-drive components by arranging the devices into arrays for redundancy. The basic idea behind RAID is to combine multiple small, inexpensive disk drives into an array whose performance exceeds that of one large, expensive drive. This array of drives appears to the computer as a single logical storage unit or drive.

RAID is a method in which information is spread across several disks, using techniques such as disk striping (RAID level 0), disk mirroring (RAID level 1), memory-style error-correcting code (ECC) (RAID level 2), bit-interleaved parity (RAID level 3), block-interleaved parity (RAID level 4), and block-interleaved distributed parity (RAID level 5) to achieve redundancy, lower latency and/or higher bandwidth for reading and/or writing to disks, and to maximize recoverability from hard-disk crashes.

The underlying concept in RAID is that data may be distributed across each drive in the array in a consistent manner. To do this, the data must first be broken into consistently sized "chunks" (often 32K or 64K in size, although different sizes can be used). Each chunk is then written to each drive in turn. When the data is to be read, the process is reversed, giving the illusion that multiple drives are actually one large drive.
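The chunking, striping, and parity ideas above can be illustrated with a minimal sketch. This is a simplified in-memory simulation, not a real storage driver: the function names (`stripe`, `read_back`, `xor_parity`), the tiny 4-byte chunk size, and the representation of each "drive" as a Python list are all illustrative assumptions; real RAID operates on disk blocks at the controller or kernel level.

```python
from functools import reduce

def stripe(data: bytes, num_drives: int, chunk_size: int = 4) -> list[list[bytes]]:
    """RAID 0-style striping: break data into consistently sized chunks
    and write each chunk to each drive in turn (round-robin)."""
    chunks = [data[i:i + chunk_size] for i in range(0, len(data), chunk_size)]
    drives = [[] for _ in range(num_drives)]  # each inner list simulates one drive
    for i, chunk in enumerate(chunks):
        drives[i % num_drives].append(chunk)
    return drives

def read_back(drives: list[list[bytes]]) -> bytes:
    """Reverse the striping: read one chunk from each drive in turn,
    reassembling the original byte stream."""
    out = []
    for row in range(max(len(d) for d in drives)):
        for d in drives:
            if row < len(d):
                out.append(d[row])
    return b"".join(out)

def xor_parity(blocks: list[bytes]) -> bytes:
    """RAID 5-style parity: bytewise XOR of equal-length blocks.
    XORing the parity with the surviving blocks regenerates a lost block."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# Striping: the drives together behave like one large logical drive.
data = b"HELLO RAID WORLD"
drives = stripe(data, num_drives=3)
assert read_back(drives) == data

# Parity recovery: lose any one block and rebuild it from the rest + parity.
blocks = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_parity(blocks)
recovered = xor_parity([blocks[0], blocks[2], parity])  # blocks[1] is "lost"
assert recovered == b"BBBB"
```

The XOR property is what lets RAID levels 3 through 5 survive a single-drive failure: the parity block is the XOR of the data blocks in each stripe, so any one missing block equals the XOR of everything that remains.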