🏆 Won 3rd place at Distributed Hack Berlin, hosted by @exalsius and @flwrlabs on 14th–15th November at the Einstein Digital Center, Berlin. All the details about the model we built are in the thread below 🧵
We worked on Track 1 of the hackathon: training a model on the NIH lung dataset, which in this scenario is distributed across three hospitals. Each hospital trains a model locally on its own data, and the local models are then aggregated into a global model using federated learning with Flower (@flwrlabs). The major constraint was that each training job could run for only 20 minutes.
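The aggregation step described above follows the usual federated averaging (FedAvg) idea: the server combines each hospital's locally trained weights, weighted by local dataset size. A minimal NumPy sketch of that step, under the assumption that FedAvg was the strategy used (this is an illustration, not the actual Flower server code):

```python
import numpy as np

def fedavg(client_weights, client_sizes):
    """Weighted average of client model parameters (FedAvg).

    client_weights: one list of np.ndarray per client (e.g. per hospital)
    client_sizes:   number of local training samples at each client
    """
    total = sum(client_sizes)
    return [
        sum(w[i] * (n / total) for w, n in zip(client_weights, client_sizes))
        for i in range(len(client_weights[0]))
    ]
```

Clients with more data pull the global model harder, which is what makes the 20-minute local jobs still add up to one coherent aggregated model.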
I built on EfficientNet-B0 pretrained on ImageNet-1K, since the focus was getting high accuracy within a short training window. For the learning-rate schedule I used 15% linear warmup + cosine decay (min_lr=0.05): start with a small LR so training doesn't explode early, then slowly reduce it so the model trains gently at the end of the cosine decay. This stabilizes early training and prevents LR swings that could destabilize fine-tuning.
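The schedule above can be sketched as a pure function of the step count. This is a generic warmup+cosine implementation, and it assumes `min_lr=0.05` means 5% of the base LR as the cosine floor (the thread doesn't say explicitly):

```python
import math

def lr_at(step, total_steps, base_lr=1e-3, warmup_frac=0.15, min_lr_frac=0.05):
    """Linear warmup over the first 15% of steps, then cosine decay
    down to min_lr_frac * base_lr by the final step."""
    warmup_steps = int(warmup_frac * total_steps)
    if step < warmup_steps:
        # Ramp from ~0 up to base_lr so early updates stay small.
        return base_lr * (step + 1) / warmup_steps
    # Cosine goes from 1 at the end of warmup to 0 at the last step.
    progress = (step - warmup_steps) / max(1, total_steps - warmup_steps)
    cosine = 0.5 * (1 + math.cos(math.pi * progress))
    return base_lr * (min_lr_frac + (1 - min_lr_frac) * cosine)
```

With a tight 20-minute budget, a step-based (rather than epoch-based) schedule like this lets the decay finish exactly when the job ends.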
To adapt the network to single-channel input, I preserved the pretrained filter structure, and made sure the new head learns fast with careful bias bounds. The big thing was handling catastrophic forgetting, so I assigned a large LR to the new classifier and a small LR to the pretrained backbone. I also configured mixed-precision (FP16) training with autocast and GradScaler, which roughly halved memory use and doubled throughput. To keep training stable and prevent exploding gradients, I applied stronger gradient clipping during warmup and relaxed the clipping later on. Finally, since most medical datasets are imbalanced by nature, I used Focal Loss (gamma=2.0, alpha=1.0) with class weighting.
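One common way to keep the pretrained filter structure while moving from 3-channel RGB to 1-channel X-ray input is to average the RGB filters of the stem convolution; the thread doesn't spell out the exact method used, so treat this as an illustrative sketch:

```python
import torch
import torch.nn as nn

def adapt_first_conv(conv: nn.Conv2d) -> nn.Conv2d:
    """Replace a 3-channel pretrained conv with a 1-channel one,
    averaging the RGB filter weights so the learned edge/texture
    detectors are preserved rather than re-initialised."""
    new = nn.Conv2d(1, conv.out_channels, conv.kernel_size,
                    stride=conv.stride, padding=conv.padding,
                    bias=conv.bias is not None)
    with torch.no_grad():
        # Mean over the input-channel dimension: (out, 3, k, k) -> (out, 1, k, k)
        new.weight.copy_(conv.weight.mean(dim=1, keepdim=True))
        if conv.bias is not None:
            new.bias.copy_(conv.bias)
    return new
```

Averaging (rather than picking one channel) keeps the filters' response magnitude roughly comparable to what the rest of the pretrained network expects.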
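The discriminative learning rates, mixed precision, and the warmup-vs-late clipping schedule all live in the training loop. A self-contained sketch of how those pieces fit together — the tiny `backbone`/`head` modules and all hyperparameter values here are illustrative stand-ins, not the actual EfficientNet-B0 configuration:

```python
import torch
import torch.nn as nn
from torch.cuda.amp import autocast, GradScaler

use_amp = torch.cuda.is_available()   # AMP falls back to a no-op on CPU

# Hypothetical stand-ins: the real model was an EfficientNet-B0 backbone
# plus a freshly initialised classifier head.
backbone = nn.Linear(16, 8)
head = nn.Linear(8, 3)

# Discriminative learning rates: a small LR protects the pretrained backbone
# from catastrophic forgetting; a large LR lets the new head learn fast.
opt = torch.optim.AdamW([
    {"params": backbone.parameters(), "lr": 1e-4},
    {"params": head.parameters(),     "lr": 1e-3},
])
scaler = GradScaler(enabled=use_amp)
warmup_steps = 10   # illustrative; in practice tied to the LR warmup

for step in range(20):
    x, y = torch.randn(4, 16), torch.randint(0, 3, (4,))
    opt.zero_grad(set_to_none=True)
    with autocast(enabled=use_amp):            # FP16 forward pass on GPU
        loss = nn.functional.cross_entropy(head(backbone(x)), y)
    scaler.scale(loss).backward()
    scaler.unscale_(opt)                       # unscale grads before clipping
    # Stronger clipping during warmup, relaxed later (values illustrative).
    max_norm = 0.5 if step < warmup_steps else 2.0
    nn.utils.clip_grad_norm_(
        list(backbone.parameters()) + list(head.parameters()), max_norm)
    scaler.step(opt)
    scaler.update()
```

Note the `scaler.unscale_(opt)` call: gradients must be unscaled back to FP32 magnitudes before clipping, otherwise the clip threshold is applied to scaled values and is effectively meaningless.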
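Focal loss down-weights examples the model already classifies confidently, so the rare classes in an imbalanced medical dataset dominate the gradient. A minimal implementation of the loss as described (gamma=2.0, alpha=1.0, optional per-class weights):

```python
import torch
import torch.nn.functional as F

def focal_loss(logits, targets, gamma=2.0, alpha=1.0, class_weights=None):
    """Focal loss: cross-entropy scaled by (1 - p_t)^gamma, where p_t is
    the predicted probability of the true class. Easy examples (p_t -> 1)
    contribute almost nothing; hard/rare examples keep full weight."""
    logp = F.log_softmax(logits, dim=1)
    ce = F.nll_loss(logp, targets, weight=class_weights, reduction="none")
    pt = logp.gather(1, targets.unsqueeze(1)).squeeze(1).exp()
    return (alpha * (1 - pt) ** gamma * ce).mean()
```

With gamma=0 this reduces exactly to standard (optionally class-weighted) cross-entropy, which is a handy sanity check.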