A distributed file system (DFS) is a key building block for cloud computing applications. In such a system, nodes simultaneously provide computing and storage functions; a file is partitioned into a number of fixed-size chunks allocated to distinct nodes so that MapReduce tasks can be performed over the nodes in parallel. However, in a cloud computing environment, failure is the norm rather than the exception, and nodes may be upgraded, replaced, or added to the existing system. Files can also be created, deleted, and appended dynamically. These factors lead to load imbalance in the distributed file system; that is, the file chunks are not distributed among the nodes as uniformly as they would be in the ideal state. Existing distributed file systems depend strongly on a central node to reallocate chunks among the storage nodes. Such dependence is clearly inadequate in a large-scale, failure-prone environment, because the central load balancer carries a workload that scales linearly with the system size, and it may therefore become a performance bottleneck and a single point of failure. In this paper, a technique is presented for improving the efficiency of a distributed file system by coupling the MapReduce model with a load-rebalancing model on a cloud platform, in order to address the load imbalance problem. Our algorithm is evaluated against the centralized approach used in production systems and a distributed solution proposed in the literature.
The results show that our algorithm is comparable to the existing centralized load-rebalancing algorithm and considerably outperforms the previous distributed algorithms in terms of load imbalance and movement cost.
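To make the two evaluation metrics concrete, the sketch below is a minimal, hypothetical illustration (not the paper's algorithm): each node's load is its chunk count, the ideal load is the total divided by the number of nodes, and the movement cost is the number of chunks migrated to restore balance. The function name and the unit-chunk assumption are ours for illustration only.

```python
def rebalance(loads, threshold=1.0):
    """Greedily move unit-size chunks from overloaded to underloaded nodes.

    loads: list of chunk counts, one entry per node.
    threshold: how far a node's load may deviate from the ideal
               before it is considered heavy or light.
    Returns (new_loads, movement_cost).
    """
    n = len(loads)
    ideal = sum(loads) / n          # uniform share each node should hold
    new = list(loads)
    moved = 0                       # movement cost: chunks migrated
    heavy = [i for i in range(n) if new[i] > ideal + threshold]
    light = [i for i in range(n) if new[i] < ideal - threshold]
    for h in heavy:
        for l in light:
            # shed chunks one at a time until either node reaches the ideal
            while new[h] > ideal and new[l] < ideal:
                new[h] -= 1
                new[l] += 1
                moved += 1
    return new, moved

# Example: three nodes with skewed loads; the ideal load is 4 chunks each.
balanced, cost = rebalance([10, 0, 2])
```

A real DFS rebalancer must additionally account for chunk replicas, network topology, and node heterogeneity, which this toy sketch deliberately omits.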