Symptom:
The datanode starts but cannot send heartbeats to the namenode, and it eventually becomes dead. The datanode log contains the following error messages:
INFO org.apache.hadoop.ipc.Server: IPC Server Responder: starting
INFO org.apache.hadoop.ipc.Server: IPC Server listener on 50020: starting
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data1/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data2/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data3/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data4/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data5/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data6/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data7/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
ERROR org.apache.hadoop.hdfs.server.common.Storage: It appears that another namenode 668863@xxx.xxx has already locked the storage directory
INFO org.apache.hadoop.hdfs.server.common.Storage: Cannot lock storage /data8/dfs/dn/dfs/data. The directory is already locked
WARN org.apache.hadoop.hdfs.server.common.Storage: Ignoring storage directory /data8/dfs/dn/dfs/data due to an exception
java.io.IOException: Cannot lock storage /data8/dfs/dn/dfs/data. The directory is already locked
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.lock(Storage.java:636)
at org.apache.hadoop.hdfs.server.common.Storage$StorageDirectory.analyzeStorage(Storage.java:459)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:152)
at org.apache.hadoop.hdfs.server.datanode.DataStorage.recoverTransitionRead(DataStorage.java:219)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:848)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:819)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:744)
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data9/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
INFO org.apache.hadoop.hdfs.server.common.Storage: Lock on /data10/dfs/dn/dfs/data/in_use.lock acquired by nodename 668863@xxx.xxx
Since "dfs.datanode.failed.volumes.tolerated" by default is set to 0, so below errors are following:FATAL org.apache.hadoop.hdfs.server.datanode.DataNode: Initialization failed for block pool Block pool BP-xxx-xxx.xxx.x.x-xxxxxxx (storage id DS-xxx-192.168.xxx.x-xxxxx-xxxxxxx) service to namenode.OPENKB.INFO/192.168.xxx.2:8020
org.apache.hadoop.util.DiskChecker$DiskErrorException: Too many failed volumes - current valid volumes: 9, volumes configured: 10, volumes failed: 1, volume failures tolerated: 0
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetImpl.<init>(FsDatasetImpl.java:186)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:34)
at org.apache.hadoop.hdfs.server.datanode.fsdataset.impl.FsDatasetFactory.newInstance(FsDatasetFactory.java:30)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initStorage(DataNode.java:857)
at org.apache.hadoop.hdfs.server.datanode.DataNode.initBlockPool(DataNode.java:819)
at org.apache.hadoop.hdfs.server.datanode.BPOfferService.verifyAndSetNamespaceInfo(BPOfferService.java:280)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.connectToNNAndHandshake(BPServiceActor.java:222)
at org.apache.hadoop.hdfs.server.datanode.BPServiceActor.run(BPServiceActor.java:664)
at java.lang.Thread.run(Thread.java:744)
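For reference, this tolerance is configured in hdfs-site.xml. Below is a minimal sketch (the property name and its default of 0 are standard HDFS; the value shown is only for illustration): with 0 tolerated failures, a single storage directory that fails to initialize, such as /data8 above, aborts block pool initialization and takes the whole datanode down. Raising the value would only mask the real problem in this case.

<property>
  <name>dfs.datanode.failed.volumes.tolerated</name>
  <!-- Default 0: any single volume failure is fatal to the datanode.
       Shown only for illustration; do not raise it to hide the
       duplicate-mount issue diagnosed below. -->
  <value>0</value>
</property>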
Env:
Hadoop 2.0
Troubleshooting:
1. At first this looked like an orphan process holding the lock on "/data8/dfs/dn/dfs/data/". However, both "fuser /data8" and "lsof | grep /data8" show nothing.
2. Restarting the problematic datanode service does not help.
3. Before the datanode service is started, the "in_use.lock" file does not exist; after the datanode starts, "in_use.lock" is created by the running datanode process itself. This means no other process is trying to lock the data directory.
4. Per "/org/apache/hadoop/hdfs/server/common/Storage.java":
public void lock() throws IOException { if (!useLock) { LOG.info("Locking is disabled"); return; } this.lock = tryLock(); if (lock == null) { String msg = "Cannot lock storage " + this.root + ". The directory is already locked."; LOG.info(msg); throw new IOException(msg); } } FileLock tryLock() throws IOException { File lockF = new File(root, STORAGE_FILE_LOCK); lockF.deleteOnExit(); RandomAccessFile file = new RandomAccessFile(lockF, "rws"); FileLock res = null; try { res = file.getChannel().tryLock(); } catch(OverlappingFileLockException oe) { file.close(); return null; } catch(IOException e) { LOG.error("Cannot create lock on " + lockF, e); file.close(); throw e; } return res; }From above code, we know that this error message "Cannot lock storage xxx" happens when the lock is null. And this happens only when "OverlappingFileLockException" is got.
Which let me believe that for some reason, the datanode process itself is trying to lock the "/data8" twice.
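As a minimal, hypothetical sketch (this is not DataNode code, and the /tmp path is made up): within a single JVM, FileChannel.tryLock() throws OverlappingFileLockException when the process already holds a lock on the same underlying file. If two configured data directories resolve to the same storage, the second lock attempt behaves exactly like the second tryLock() below.

import java.io.RandomAccessFile;
import java.nio.channels.FileLock;
import java.nio.channels.OverlappingFileLockException;

public class LockTwiceDemo {
  public static void main(String[] args) throws Exception {
    // Hypothetical lock file; in the real case the two paths under /data6
    // and /data8 were the same file because both mounts backed onto /dev/sdf1.
    RandomAccessFile f1 = new RandomAccessFile("/tmp/in_use.lock", "rws");
    FileLock first = f1.getChannel().tryLock();      // first lock succeeds
    System.out.println("first lock acquired: " + (first != null));

    RandomAccessFile f2 = new RandomAccessFile("/tmp/in_use.lock", "rws");
    try {
      f2.getChannel().tryLock();                     // same JVM, same file
    } catch (OverlappingFileLockException e) {
      // This is the exception Storage.tryLock() catches and turns into a
      // null lock, producing "Cannot lock storage ... already locked".
      System.out.println("got OverlappingFileLockException, as expected");
    } finally {
      f2.close();
      if (first != null) {
        first.release();
      }
      f1.close();
    }
  }
}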
Root Cause:
Finally we found the root cause: on that datanode server, /data8 was a mount point for the same device (/dev/sdf1) as /data6.
In other words, /data8 and /data6 are aliases for the same filesystem, so /data6/dfs/dn/dfs/data/in_use.lock and /data8/dfs/dn/dfs/data/in_use.lock are the same file.
This explains why the datanode process tries to lock the /data8 storage twice: the lock was already taken while processing /data6. The df output below shows both mount points backed by /dev/sdf1:
/dev/sdf1  1.8T  914G  826G  53%  /data6
/dev/sdf1  1.8T  914G  826G  53%  /data8
Solution:
1. Remove /data8 from the *-site.xml configuration on the problematic datanode server (see the configuration sketch after these steps).
2. Restart that datanode so it skips /data8.
3. SysAdmin to fix the mount point issue.
4. Create needed directories on /data8.
5. Add the /data8 back into all *-site.xml.
6. Restart that datanode.
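Below is a minimal sketch of what step 1 looks like in hdfs-site.xml on the affected node. The property name dfs.datanode.data.dir is the standard Hadoop 2.x setting; the directory list is inferred from the lock paths in the log above and is only illustrative. Step 5 adds the /data8 entry back once the mount is fixed.

<property>
  <name>dfs.datanode.data.dir</name>
  <!-- /data8/dfs/dn/dfs/data removed (step 1) until the duplicate mount of
       /dev/sdf1 is fixed; add it back in step 5. -->
  <value>/data1/dfs/dn/dfs/data,/data2/dfs/dn/dfs/data,/data3/dfs/dn/dfs/data,/data4/dfs/dn/dfs/data,/data5/dfs/dn/dfs/data,/data6/dfs/dn/dfs/data,/data7/dfs/dn/dfs/data,/data9/dfs/dn/dfs/data,/data10/dfs/dn/dfs/data</value>
</property>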
Note: This symptom looks similar to "Data node becomes dead due to one disk failure", but the solution is totally different. The key is to find the root cause instead of guessing and trying.