Decentralized System

The need for a method to create a collaborative machine learning model that can utilize data from different clients, each with privacy constraints, has recently emerged. This is due to privacy regulations, such as the General Data Protection Regulation (GDPR), together with the fact that machine learning models in general need large amounts of data to perform well. Google introduced federated learning in 2016 with the aim of addressing this problem. Federated learning can be further divided into horizontal and vertical federated learning, depending on how the data is partitioned across the different clients. Vertical federated learning is applicable when different features are held by distributed computation nodes and cannot be shared between them. The aim of this thesis is to identify the current state-of-the-art methods in vertical federated learning, implement the most interesting ones, and compare the results in order to draw conclusions about the benefits and drawbacks of the different methods. From the results of the experiments, a method called FedBCD shows very promising results: it achieves massive improvements in the number of communication rounds needed for convergence, at the cost of more computation at the clients. A comparison between synchronous and asynchronous approaches shows slightly better results for the synchronous approach in scenarios with no delay. Here, delay refers to slower performance in one of the workers, caused either by lower computational resources or by communication issues.
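To illustrate the trade-off described above, the following is a minimal sketch of a FedBCD-style update scheme for a vertically partitioned linear model. Two parties each hold a disjoint block of features; per communication round they exchange their intermediate outputs once, then each runs several local gradient steps on its own weight block against the partner's (stale) contribution. The function name, data split, and hyperparameters are illustrative assumptions, not the thesis's actual implementation.

```python
import numpy as np

def fedbcd_sketch(X_A, X_B, y, rounds=50, local_steps=5, lr=0.1):
    """FedBCD-style sketch for vertical FL on a linear model.

    X_A, X_B: feature blocks held by party A and party B.
    Each communication round exchanges intermediate outputs once;
    each party then performs `local_steps` gradient updates on its
    own weights using the stale partner output (hypothetical setup).
    """
    n = len(y)
    w_A = np.zeros(X_A.shape[1])
    w_B = np.zeros(X_B.shape[1])
    for _ in range(rounds):
        # One communication round: exchange intermediate outputs once.
        h_A = X_A @ w_A
        h_B = X_B @ w_B
        # Party A: local steps against the stale contribution h_B.
        for _ in range(local_steps):
            residual = X_A @ w_A + h_B - y
            w_A -= lr * X_A.T @ residual / n
        # Party B: local steps against the stale contribution h_A.
        for _ in range(local_steps):
            residual = h_A + X_B @ w_B - y
            w_B -= lr * X_B.T @ residual / n
    return w_A, w_B
```

With `local_steps=1` this reduces to plain block-coordinate gradient descent, where every update costs one communication round; raising `local_steps` shifts that cost onto local computation, which is the trade-off the abstract attributes to FedBCD.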
