How do models trained with Lightly compare with other approaches wrt adversarial robustness?
Can using Lightly introduce additional bias in the model, since only a select few of the inputs are being labeled?
This may be a concern for publicity purposes.
By the way, I thought ETH spinoff requirements were incompatible with YC requirements - nice to see it can be made to work.
Thanks for the interest and great questions. Responses are below:
>How do models trained with Lightly compare with other approaches wrt adversarial robustness?
We have no benchmark available yet. The two approaches can be combined: you can use Lightly to pick a diverse subset, label it, check for adversarial robustness while training and evaluating the model, and then iterate.
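As a minimal sketch of what that loop could look like in PyTorch: `select_diverse_subset`, `label`, and `train` are hypothetical placeholders standing in for the Lightly selection and training pipeline, and FGSM is used here only as one common robustness probe.

```python
import torch
import torch.nn.functional as F

def fgsm_attack(model, images, labels, epsilon=0.03):
    """Perturb inputs with the fast gradient sign method (FGSM)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), labels)
    loss.backward()
    # Step in the direction that increases the loss; clamp to a valid pixel range.
    adv = (images + epsilon * images.grad.sign()).clamp(0, 1)
    return adv.detach()

def adversarial_accuracy(model, loader, epsilon=0.03):
    """Fraction of adversarially perturbed samples still classified correctly."""
    model.eval()
    correct, total = 0, 0
    for images, labels in loader:
        adv = fgsm_attack(model, images, labels, epsilon)
        with torch.no_grad():
            preds = model(adv).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.numel()
    return correct / total

# Iterate: select a diverse subset, label it, train, then probe robustness.
# The three helpers below are hypothetical, not part of any real API:
# subset = select_diverse_subset(unlabeled_pool, n=1000)
# model = train(label(subset))
# print(adversarial_accuracy(model, eval_loader))
```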
>Can using Lightly introduce additional bias in the model, since only a select few of inputs are being labeled? This may be a concern for publicity purposes.
Removing one bias automatically introduces another. BUT we want the introduced bias to be controlled and known.
Bias typically comes from the way we collect data. For example, in autonomous driving more data is collected during the day than at night, more during sunny weather than in rain or snow, and more in places like San Francisco than in places like New Mexico. Most datasets are biased.
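To illustrate what "controlled and known" can mean in practice, here is a small sketch that measures the metadata distribution of a selected subset and computes per-sample weights toward a target mix; the metadata fields and the 50/50 target are made up for the example.

```python
from collections import Counter

# Hypothetical per-frame metadata, e.g. extracted from sensor logs.
selected = [
    {"time": "day", "weather": "sunny"},
    {"time": "day", "weather": "rain"},
    {"time": "night", "weather": "sunny"},
]

def distribution(samples, key):
    """Empirical share of each metadata value in the subset."""
    counts = Counter(s[key] for s in samples)
    total = sum(counts.values())
    return {k: v / total for k, v in counts.items()}

def sample_weights(samples, key, target):
    """Per-sample weights that shift the subset toward a target mix,
    e.g. for use with a weighted sampler during training."""
    actual = distribution(samples, key)
    return [target[s[key]] / actual[s[key]] for s in samples]

# Example: enforce a 50/50 day/night mix even though night frames are rare.
weights = sample_weights(selected, "time", {"day": 0.5, "night": 0.5})
print(distribution(selected, "time"), weights)
```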
> By the way, I thought ETH spinoff requirements were incompatible with YC requirements - nice to see it can be made to work.
As far as we know, we are the first ETH spin-off that is part of the YC program. We hope they don't abandon us.