[Ilugc] Password as an argument in SSH
l.mohanphy at gmail.com
Sat Mar 20 21:34:28 IST 2010
On Sat, Mar 20, 2010 at 9:12 PM, Varadharajan Mukundan <srinathsmn at gmail.com> wrote:
> Hi all,
> Thanks for all the helpful advice. I think Hadoop usually SSHes into
> localhost to Map and Reduce. I don't know how it does this, and hence
> posted here.
OK, you have to set up passphraseless SSH.
HDFS has a master/slave architecture. An HDFS cluster consists of a
single NameNode, the master server, and a number of DataNodes, usually
one per node in the cluster. The master (NameNode) machine uses Secure
Shell (SSH) commands to start the DataNodes.
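For example, Hadoop's start scripts loop over the hosts listed in
conf/slaves and run the daemon start command on each of them over SSH.
Roughly, as a simplified sketch of what bin/slaves.sh does (not the
actual script; $HADOOP_HOME here stands for your Hadoop install
directory):

# run the daemon start command on every host in conf/slaves over SSH
for slave in $(cat "$HADOOP_HOME/conf/slaves"); do
  ssh "$slave" "$HADOOP_HOME/bin/hadoop-daemon.sh start datanode" &
done
wait    # wait for all the background ssh sessions to finish

This is why every one of those ssh calls has to succeed without a
password prompt.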
Now check that you can ssh to localhost without a passphrase:
$ ssh localhost
It should log in without asking for a password. If it does not, you
have to set up passphraseless SSH on your local box.
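With OpenSSH, something along these lines should do it (this generates
an RSA key with an empty passphrase; adjust the key type or file paths
to suit your setup):

$ ssh-keygen -t rsa -P "" -f ~/.ssh/id_rsa         # key with an empty passphrase
$ cat ~/.ssh/id_rsa.pub >> ~/.ssh/authorized_keys  # authorize it for localhost
$ chmod 600 ~/.ssh/authorized_keys                 # sshd ignores keys with loose permissions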
In a multi-node Hadoop cluster, the master node uses Secure Shell (SSH)
commands to manage the remote nodes.
By default, if a user on NodeA wants to log in to a remote NodeB using
SSH, he will be asked for NodeB's password for authentication. However,
it is impractical to type the authentication password every time the
master node wants to operate on a slave node. Under these circumstances,
we must adopt public key authentication. Simply speaking, every node
generates a key pair, one public key and one private key, and NodeA can
log in to NodeB without password authentication only if NodeB has a
copy of NodeA's public key. In other words, if NodeB has NodeA's public
key, NodeA is a trusted node to NodeB. In a Hadoop cluster, all the
slave nodes must have a copy of the master node's public key.
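To distribute the key from the master, something like this usually
works (the user name "hadoop" and host name "slave1" below are only
placeholders for your own):

$ ssh-copy-id hadoop@slave1    # appends your public key to slave1's authorized_keys

or, where ssh-copy-id is not available:

$ cat ~/.ssh/id_rsa.pub | ssh hadoop@slave1 \
    'mkdir -p ~/.ssh && cat >> ~/.ssh/authorized_keys'

Repeat this for every slave node, then test with a plain "ssh slave1"
from the master; it should log in without prompting for a password.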
Thanks & Regards