In his AI Speaker Series talk at Sutter Hill Ventures this year, Alexei Efros from UC Berkeley dropped a bomb about visual computing and beyond: algorithms alone aren't enough; it's the vast troves of data that are driving progress.
Efros noted that large datasets are necessary but not sufficient on their own. We should be humble about how much data contributes to our results and give credit where it's due.
In visual computing and related areas like image recognition and video analysis, it's time we all acknowledged the importance of having enough quality training material before diving into complex algorithms.
I wonder if this message will resonate with more practitioners out there. Do you agree? How much has your project benefited from vast datasets versus fancy new models? Any thoughts on balancing big datasets and clever AI in practice?
Found this here:
https://www.lukew.com/ff/entry.asp?2128