Oracle recommends an odd number of Voting Disks
- A node must be able to access strictly more than half of the voting disks at any time.
- If only 1 voting disk is configured and it goes down or becomes corrupted, the cluster stops functioning immediately.
- More than half of the configured voting disks must always be available and responsive for Oracle Clusterware to operate properly.
- To survive the loss of N voting disks, we must configure at least 2N+1 voting disks.
A few scenarios, by number of voting disks configured:
- 1 voting disk: lose it and the cluster stops functioning.
- 2 voting disks: lose 1 and, per the majority rule, the cluster stops functioning.
- 3 voting disks: lose 1, you still have 2, and the cluster runs fine.
- 3 voting disks: lose 2 and the cluster stops.
- 4 voting disks: lose 1, you still have 3, and the cluster runs fine.
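The scenarios above all follow from the same strict-majority rule, which can be sketched as a quick check (`cluster_survives` is an illustrative helper, not an Oracle API):

```python
def cluster_survives(total_disks, lost_disks):
    """Majority rule: a node must access strictly more than half
    of the configured voting disks for the cluster to keep running."""
    available = total_disks - lost_disks
    return available > total_disks / 2

# Walk through the scenarios from the post:
for total, lost in [(1, 1), (2, 1), (3, 1), (3, 2), (4, 1)]:
    status = "runs fine" if cluster_survives(total, lost) else "stops"
    print(f"{total} voting disk(s), {lost} lost -> cluster {status}")
```

Note that 4 disks tolerate only 1 failure, the same as 3 disks, which is why an even count buys no extra protection.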
This is the reason why, when using Oracle to provide redundancy for your voting disks, Oracle strongly recommends configuring 3 or more (an odd number of) voting disks.