Saturday, September 29, 2012

Linux Interview Questions

Hello friends, I have faced many Linux interviews, and my personal experience says you should always be prepared for the following questions. So let's prepare these questions and crack the interview.

1. Describe the Linux boot process.
2. What is an initrd (initial RAM disk) and why is it used?
3. What is a Linux filesystem? How many filesystem types does Linux support?
4. What is the difference between ext2, ext3, and ext4?
5. What is the ext3 journaling filesystem?
6. What are the minimum requirements for a RHEL 4/5/6 installation?
7. What is RAID and what are its types (1+0, 0+1, etc.)?
8. What is LD_LIBRARY_PATH?
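
For the last question, a minimal Python sketch (my own illustration, not from any interview) of what LD_LIBRARY_PATH actually holds: a colon-separated list of extra directories that the dynamic linker searches for shared libraries before the standard locations.

import os

# LD_LIBRARY_PATH is a colon-separated list of directories that the dynamic
# linker (ld.so) searches for shared libraries before the default locations
# (/lib, /usr/lib and the paths configured in /etc/ld.so.conf).
ld_path = os.environ.get("LD_LIBRARY_PATH", "")

print("Extra library search directories:")
for directory in filter(None, ld_path.split(":")):
    print(" ", directory)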

Thursday, September 20, 2012


 

 

Red Hat Enterprise Linux 6 technology capabilities and limits

Values in each row are listed for Version 3 | Version 4 | Version 5 | Version 6, in that order.

Maximum logical CPUs [2]
  x86:        16 | 32 | 32 | 32
  Itanium 2:  8 | 256/512 | 256/1024 | N/A[6]
  x86_64:     8 | 64/64 | 160/255 | 160/4096
  POWER:      8 | 64/128 | 128/128 | 128
  System z:   64 (z900) | 64 (z10 EC) | 80 (z196) | 80 (z196)

Maximum memory [5]
  x86:        64GB[3] | 64GB[3] | 16GB[4] | 16GB[4]
  Itanium 2:  128GB | 2TB | 2TB | N/A[6]
  x86_64:     128GB | 256GB/1TB | 1TB | 2TB/64TB
  POWER:      64GB | 128GB/1TB | 512GB/1TB | 2TB
  System z:   256GB (z900) | 1.5TB (z10 EC) | 3TB (z196) | 3TB (z196)

Required minimums
  x86:        256MB | 256MB | 512MB minimum, 1GB per logical CPU recommended | 512MB minimum, 1GB per logical CPU recommended
  x86_64:     256MB | 256MB | 512MB minimum, 1GB per logical CPU recommended | 1GB minimum, 1GB per logical CPU recommended
  Itanium 2:  512MB | 512MB | 512MB, 1GB per logical CPU recommended | N/A[6]
  POWER:      512MB | 512MB | 1GB minimum, 2GB recommended | 2GB minimum, 2GB required per install
  Minimum diskspace: 800MB | 800MB | 1GB minimum, 5GB recommended | 1GB minimum, 5GB recommended

File systems and storage limits
  Maximum file size (Ext3):         2TB | 2TB | 2TB | 2TB
  Maximum file system size (Ext3):  2TB | 8TB | 16TB | 16TB
  Maximum file size (Ext4):         -- | -- | 16TB | 16TB
  Maximum file system size (Ext4):  -- | -- | 16TB | 16TB
  Maximum file size (GFS):          2TB | 16TB/8EB | 16TB/8EB[7] | N/A
  Maximum file system size (GFS):   2TB | 16TB/8EB | 16TB/8EB[7] | N/A
  Maximum file size (GFS2):         -- | -- | 25TB | 100TB
  Maximum file system size (GFS2):  -- | -- | 25TB | 100TB
  Maximum file size (XFS):          -- | -- | 100TB | 100TB
  Maximum file system size (XFS):   -- | -- | 100TB | 100TB
  Maximum Boot LUN size (BIOS):     -- | -- | <2TB | <2TB[10]
  Maximum Boot LUN size (UEFI):     -- | -- | N/A | Any[10]
  Maximum x86 per-process virtual address space:    Approx. 4GB | Approx. 4GB | Approx. 3GB[4] | Approx. 3GB[4]
  Maximum x86_64 per-process virtual address space: -- | 512GB | 2TB | 128TB

Kernel and OS features
  Kernel foundation:        Linux 2.4.21 | Linux 2.6.9 | Linux 2.6.18 | Linux 2.6.32 - 2.6.34
  Compiler/toolchain:       GCC 3.2 | GCC 3.4 | GCC 4.1 | GCC 4.4
  Languages supported:      10 | 15 | 19 | 22
  NIAP/CC certified[11]:    Yes (3+) | Yes (4+) | Yes (4+) | Under evaluation
  Common Criteria certified KVM[11]: -- | -- | Under evaluation | Under evaluation
  IPv6:                     -- | -- | Ready Logo Phase 2 | Under evaluation
  Compatibility libraries:  V2.1 | V2.1 and V3 | V3 and V4 | V4 and V5
  FIPS certified[11]:       -- | -- | Yes | Under evaluation
  Common Operating Environment (COE) compliant: Yes | Yes | N/A | N/A
  LSB-compliant:            Yes - 1.3 | Yes - 3 | Yes - 3.1 | Under evaluation
  GB18030:                  No | Yes | Yes | Yes

Client environment
  Desktop GUI:      Gnome 2.2 | Gnome 2.8 | Gnome 2.16 | Gnome 2.28
  Graphics:         XFree86 | X.org | X.org 7.1.1 | X.org 7.4
  OpenOffice:       V1.1 | V1.1.2 | V2.0.4[12] | V3.2[12]
  Gnome Evolution:  V1.4 | V2.0 | V2.8.0 | V2.28
  Default browser:  Mozilla | Firefox | Firefox 1.5[12] | Firefox 3.6[12]
Notes:
  1. Supported limits reflect the current state of system testing by Red Hat and its partners for mainstream hardware. Systems exceeding these supported limits may be included in the Hardware Catalog after joint testing between Red Hat and its partners. If they exceed the supported limits posted here, entries in the Hardware Catalog will include a reference to the details of the system-specific limits and are fully supported. In addition to supported limits reflecting hardware capability, there may be additional limits under the Red Hat Enterprise Linux subscription terms. Supported limits are subject to change as ongoing testing completes.
  2. Red Hat defines a logical CPU as any schedulable entity. So every core/thread in a multicore/thread processor is a logical CPU.
  3. The "SMP" kernel supports a maximum of 16GB of main memory. Systems with more than 16GB of main memory use the "Hugemem" kernel. In certain workload scenarios it may be advantageous to use the "Hugemem" kernel on systems with more than 12GB of main memory.
  4. The x86 "Hugemem" kernel is not provided in Red Hat Enterprise Linux 5 or 6.
  5. The architectural limits are based on the capabilities of the RHEL kernel and the physical hardware. The RHEL 6 limit is based on 46-bit physical memory addressing; the RHEL 5 limit is based on 40-bit physical memory addressing (a quick check of this arithmetic follows these notes). All system memory should be balanced across NUMA nodes in a NUMA-capable system.
  6. Red Hat Enterprise Linux 6 does not include support for the Itanium 2 architecture.
  7. If there are any 32-bit machines in the cluster, the maximum gfs file system size is 16TB. If all machines in the cluster are 64-bit, the maximum size is 8EB.
  8. Officially supports 125 CPUs across the entire machine.
  9. Requires Intel EPT and AMD RVI technology support.
  10. UEFI and GPT support is required for boot LUNs larger than 2TB (https://access.redhat.com/kb/docs/DOC-16981).
  11. Get security certification details.
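
As a quick check of the addressing arithmetic in note 5 (my own illustration, not part of the original article): the physical address width caps the addressable memory, and the results line up with the x86_64 memory rows above.

# Maximum physical memory addressable with n address bits, in TiB.
def max_memory_tib(address_bits):
    return 2 ** address_bits / 2 ** 40

print(max_memory_tib(40))  # RHEL 5: 40-bit addressing ->  1.0 TiB (matches the 1TB x86_64 limit)
print(max_memory_tib(46))  # RHEL 6: 46-bit addressing -> 64.0 TiB (matches the 64TB theoretical x86_64 limit)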

 
Reference: http://www.redhat.com/resourcelibrary/articles/articles-red-hat-enterprise-linux-6-technology-capabilities-and-limits

Tuesday, September 11, 2012

Values in each row are listed for RAID 0 | RAID 1 | RAID 1E | RAID 5 | RAID 5EE, in that order.

  Minimum # Drives:             2 | 2 | 3 | 3 | 4
  Data Protection:              No protection | Single-drive failure | Single-drive failure | Single-drive failure | Single-drive failure
  Read Performance:             High | High | High | High | High
  Write Performance:            High | Medium | Medium | Low | Low
  Read Performance (degraded):  N/A | Medium | High | Low | Low
  Write Performance (degraded): N/A | High | High | Low | Low
  Capacity Utilization:         100% | 50% | 50% | 67% - 94% | 50% - 88%
  Typical Applications:         High-end workstations, data logging, real-time rendering, very transitory data | Operating system, transaction databases | Operating system, transaction databases | Data warehousing, web serving, archiving | Data warehousing, web serving, archiving

Values in each row are listed for RAID 6 | RAID 10 | RAID 50 | RAID 60, in that order.

  Minimum # Drives:             4 | 4 | 6 | 8
  Data Protection:              Two-drive failure | Up to one disk failure in each sub-array | Up to one disk failure in each sub-array | Up to two disk failures in each sub-array
  Read Performance:             High | High | High | High
  Write Performance:            Low | Medium | Medium | Medium
  Read Performance (degraded):  Low | High | Medium | Medium
  Write Performance (degraded): Low | High | Medium | Low
  Capacity Utilization:         50% - 88% | 50% | 67% - 94% | 50% - 88%
  Typical Applications:         Data archive, backup to disk, high-availability solutions, servers with large capacity requirements | Fast databases, application servers | Large databases, file servers, application servers | Data archive, backup to disk, high-availability solutions, servers with large capacity requirements
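
The capacity utilization ranges in the two tables above follow directly from the drive count. The sketch below is my own illustration (the formulas are the standard ones for each level, not taken from the tables) and reproduces the endpoints of those ranges for RAID 5 and RAID 6 arrays of up to 16 drives.

# Usable fraction of raw capacity for common RAID levels, given n identical drives.
def usable_fraction(level, n):
    if level == "RAID 0":               # striping only, no redundancy
        return 1.0
    if level in ("RAID 1", "RAID 10"):  # everything is mirrored once
        return 0.5
    if level == "RAID 5":               # one drive's worth of parity
        return (n - 1) / n
    if level == "RAID 6":               # two drives' worth of parity
        return (n - 2) / n
    raise ValueError(level)

# RAID 5 with 3 drives -> 67%, with 16 drives -> 94% (the 67% - 94% range above).
for n in (3, 16):
    print(f"RAID 5, {n} drives: {usable_fraction('RAID 5', n):.0%}")

# RAID 6 with 4 drives -> 50%, with 16 drives -> 88% (the 50% - 88% range above).
for n in (4, 16):
    print(f"RAID 6, {n} drives: {usable_fraction('RAID 6', n):.0%}")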
Types of RAID
Software-Based
  Description: Best used for large-block applications such as data warehousing or video streaming, and also where servers have the available CPU cycles to manage the I/O-intensive operations certain RAID levels require. Included in the OS, such as Windows®, NetWare, and Linux; all RAID functions are handled by the host CPU, which can severely tax its ability to perform other computations.
  Advantages: Low price; only requires a standard adapter.

Hardware-Based
  Description: Best used for small-block applications such as transaction-oriented databases and web servers. Processor-intensive RAID operations are off-loaded from the host CPU to enhance performance, and a battery-backed write-back cache can dramatically increase performance without adding risk of data loss.
  Advantages: Data protection and performance benefits of RAID; more robust fault-tolerant features and increased performance versus software-based RAID.

External Hardware
  Description: Connects to the server via a standard adapter. RAID functions are performed on a microprocessor located on the external RAID adapter, independent of the host.
  Advantages: OS independent; build high-capacity storage systems for high-end servers.


RAID 0

RAID 0, also called striping, is a scheme in which data is divided into blocks and distributed across the drives in the array. This level provides no redundancy; because no capacity or I/O is spent on redundancy, it has the best overall performance, but a single drive failure destroys the whole array. For this reason it is not suitable for mission-critical data and is best used where improved performance is the primary driver.
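
To make the block-distribution idea concrete, here is a small sketch of how a RAID 0 layout can map a logical block number to a drive and an offset; the four-drive array and one-block stripe unit are assumptions for illustration only.

# Map a logical block to (drive index, block offset on that drive) under RAID 0.
DRIVES = 4  # arbitrary drive count for illustration

def raid0_locate(logical_block):
    drive = logical_block % DRIVES    # blocks rotate round-robin across the drives
    offset = logical_block // DRIVES  # position of the block within that drive
    return drive, offset

for block in range(8):
    drive, offset = raid0_locate(block)
    print(f"logical block {block} -> drive {drive}, offset {offset}")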

RAID 1

There are two implementations of RAID 1: mirroring and duplexing. With both schemes, data is duplicated on a second disk. Mirroring uses a single drive controller, while duplexing uses two controllers. In the event of a single drive failure, data can still be read from and written to the surviving drive, providing fault tolerance; with duplexing, even a controller failure won't bring down the system. However, RAID 1 offers no real performance improvement, and because every write must go to both drives, write performance may even decrease slightly.
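
Here is a toy sketch of the mirroring behaviour just described, using two in-memory "drives" (purely illustrative, not how a real controller works): every write is duplicated, and a read can be served from whichever copy is still healthy.

# Toy RAID 1 mirror: two in-memory "drives" holding block -> data mappings.
drives = [dict(), dict()]
failed = [False, False]

def write(block, data):
    for d in drives:                    # a write is duplicated on both drives
        d[block] = data

def read(block):
    for i, d in enumerate(drives):      # read from any drive that has not failed
        if not failed[i]:
            return d[block]
    raise IOError("both mirrors failed")

write(0, b"hello")
failed[0] = True                        # simulate a single-drive failure
print(read(0))                          # data is still available from the mirror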

RAID 5

RAID 5 is called striping with distributed parity. Data and parity (error detection and correction information) are striped across three or more drives. The parity is not kept on a dedicated drive but distributed across all drives in the array; the equivalent of one drive's capacity is consumed by parity, which reduces the available storage space. If a drive fails, its data can be rebuilt from the remaining data blocks and the parity information. Read performance is good because data can be read simultaneously from multiple drives, while writes carry extra overhead because parity must be recalculated and rewritten on every update.
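
The parity in question is typically a byte-wise XOR of the data blocks in a stripe, which is exactly what makes single-drive recovery possible. A minimal illustration (in-memory blocks, not a real on-disk layout):

from functools import reduce

def xor_blocks(blocks):
    """Byte-wise XOR of equal-length blocks (the usual RAID 5 parity)."""
    return bytes(reduce(lambda a, b: a ^ b, column) for column in zip(*blocks))

# One stripe spread over three data drives plus parity.
data = [b"AAAA", b"BBBB", b"CCCC"]
parity = xor_blocks(data)

# If one drive is lost, its block is the XOR of the surviving blocks and the parity.
lost = 1
recovered = xor_blocks([blk for i, blk in enumerate(data) if i != lost] + [parity])
print(recovered == data[lost])  # True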

RAID 0+1 (01)

This is the first of the hybrid levels, which simply combine other RAID levels. RAID 0+1 (also called 01) mirrors and stripes data simultaneously: two striped arrays are created, and one acts as the mirror of the other. Combining striping and mirroring marries high performance with fault tolerance, making this one of the more popular levels. It requires a minimum of four drives.

RAID 1+0 (10)

With this RAID level, data is also mirrored and striped simultaneously, but in the opposite order. It is most often implemented with four drives: the drives are grouped into mirrored pairs, and data is striped across those pairs. This provides even higher fault tolerance along with good performance. RAID 10 differs from 01 in that the layers are reversed: where 0+1 is a mirror of stripes, 1+0 is a stripe of mirrors.
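
One way to see the practical difference is to count which two-drive failures each layout survives on four drives. The short enumeration below is my own illustration (the grouping of drives 0-1 and 2-3 is an assumption): a stripe of mirrors survives more two-drive failure combinations than a mirror of stripes.

from itertools import combinations

# Four drives, numbered 0-3.
# RAID 1+0: two mirrored pairs (0,1) and (2,3), striped together.
#           Data survives as long as neither pair loses both of its drives.
# RAID 0+1: two striped sets (0,1) and (2,3), mirrored against each other.
#           Data survives only while at least one stripe set is fully intact.

def raid10_survives(failed):
    return not ({0, 1} <= failed or {2, 3} <= failed)

def raid01_survives(failed):
    return {0, 1}.isdisjoint(failed) or {2, 3}.isdisjoint(failed)

pairs = [set(c) for c in combinations(range(4), 2)]
print("RAID 1+0 survives", sum(raid10_survives(p) for p in pairs), "of", len(pairs), "two-drive failures")
print("RAID 0+1 survives", sum(raid01_survives(p) for p in pairs), "of", len(pairs), "two-drive failures")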