Ethereum is currently on a run, crushing one all-time high after another and winning substantial market share from Bitcoin. One reason for this might be its rich ecosystem of decentralized applications in various fields, such as Decentralized Finance (DeFi) or Non-Fungible Tokens (NFTs).
DeFi in particular offers huge potential for returns that can even outperform stock market investments. However, there is one big catch: due to the popularity of these fields, the Ethereum network is highly congested, and transaction fees exceed the profit margins of smaller investors.
But no worries, in this article I present you a…
Note from the editors: This article is for educational and entertainment purposes only. If you want to use the presented model for real bets, do so at your own risk. Please make sure that this is in alignment with the terms and conditions of your bookmaker.
With the outbreak of the pandemic and the corresponding shutdown of the economy, millions of people unfortunately lost their jobs. Desperate times call for desperate measures, and we might be interested in creating new, unconventional sources of income. …
Last fall, while struggling to fine-tune the pre-trained multilingual BERT model for argumentation mining (detecting argumentative structures in text) in the context of my Master's thesis, I stumbled across the open-source framework FARM (Framework for Adapting Representation Models) by Deepset.ai. Not only do they provide a German BERT model, but also an easily implementable framework with extensive features for transfer learning. I wouldn't say that it saved my thesis, but it at least saved a lot of nerves and hair on my head 😅.
In this short article, I describe how to split your dataset into train and test data for machine learning by applying sklearn's train_test_split function. I use the data frame that was created with the program from my last article. The data is based on the raw BBC News Article dataset published by D. Greene and P. Cunningham.
Feel free to check out the source code here if you’re interested.
If you missed my first guide to extract information from text files, you might want to check it out to get a better understanding of the data we are dealing…
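The split described above can be sketched in a few lines. The data frame below is a tiny stand-in with the same text/category structure as the BBC data; in the article, the frame comes from the previous program.

```python
import pandas as pd
from sklearn.model_selection import train_test_split

# Stand-in data frame with the same shape as the BBC article data:
# one text column and one category label per row.
df = pd.DataFrame({
    "text": [f"article {i}" for i in range(8)],
    "category": ["tech", "sport"] * 4,
})

# Hold out 25% of the rows for testing. stratify keeps the category
# distribution identical in both splits; random_state makes it reproducible.
train_df, test_df = train_test_split(
    df, test_size=0.25, random_state=42, stratify=df["category"]
)

print(len(train_df), len(test_df))  # → 6 2
```

Stratifying is worth the extra argument on small text datasets: without it, a random split can easily leave one category underrepresented in the test set.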
In this article, I describe how to transform a set of text files into a data table which can be used for natural language processing and machine learning. To showcase my approach I use the raw BBC News Article dataset published by D. Greene and P. Cunningham in 2006.
Before jumping into the IDE and starting to code, I usually follow a process of understanding the data, defining an output, and translating everything into code. I consider the steps before coding the most important, since they help to structure the coding process and make it more efficient.
Software engineer with business degree, rock climber and lifelong learner from Switzerland.