tech:


Stochastic gradient descent

Stochastic gradient descent (SGD) is a popular optimization algorithm in machine learning, especially for training deep learning models.

The goal of SGD, like other optimization algorithms, is to find parameters (e.g., the weights and biases of a neural network) that minimize the loss function, a measure of the model’s error on the training data; lowering this loss is what improves the model’s performance.
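In symbols (a standard formulation, not spelled out above): for a training set of N examples, SGD seeks parameters that minimize the average per-example loss, and each update moves against the gradient of the loss on a single sampled example i, scaled by a learning rate η.

```latex
% Standard empirical-risk notation (ours, for illustration):
% L(theta) is the average per-example loss; eta is the learning rate.
\[
  L(\theta) = \frac{1}{N}\sum_{i=1}^{N} \ell(\theta;\, x_i, y_i),
  \qquad
  \theta \leftarrow \theta - \eta\, \nabla_\theta\, \ell(\theta;\, x_i, y_i)
\]
```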

The term ‘stochastic’ in SGD comes from the fact that the gradient based on a single example is a ‘stochastic approximation’ of the true gradient. This means it’s a noisy but unbiased estimate. This noise can help the algorithm jump out of shallow local minima of the loss function, which can be beneficial for finding better and potentially global minima.
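As a concrete illustration of that point (a toy example of ours, not from the article): for a small least-squares problem, the gradient computed on one randomly chosen example is noisy, but the average over many such single-example gradients matches the full-batch gradient, which is what “unbiased” means here.

```python
# Toy demo: single-example gradients are noisy but unbiased estimates
# of the full-batch gradient for least-squares linear regression.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))                 # 1000 examples, 3 features
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=1000)   # noisy targets

w = np.zeros(3)                                # current parameters

def grad_one(i):
    """Gradient of the squared error on example i with respect to w."""
    err = X[i] @ w - y[i]
    return 2 * err * X[i]

full_grad = 2 * X.T @ (X @ w - y) / len(y)     # exact full-batch gradient
single_grad = grad_one(rng.integers(len(y)))   # one noisy estimate
avg_grad = np.mean([grad_one(i) for i in range(len(y))], axis=0)

print(full_grad)
print(single_grad)   # noisy, but centered on full_grad
print(avg_grad)      # averaging single-example gradients recovers full_grad
```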

Here’s a simplified explanation of how SGD works:

  1. Initialize the parameters (weights and biases in the case of neural networks) with random values.
  2. Randomly pick a single data point (or a mini-batch) from the dataset.
  3. Compute the gradient of the loss function with respect to the parameters for that data point. The gradient indicates the direction in which the loss is increasing most rapidly.
  4. Update the parameters by a small step in the opposite direction of the gradient. The size of this step is determined by the learning rate, a hyperparameter that controls how quickly the model learns.
  5. Repeat steps 2-4 until the algorithm converges to a minimum, which is when the loss can no longer be significantly reduced.
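Putting the steps together, here is a minimal runnable sketch on least-squares linear regression with NumPy; the problem, learning rate, and epoch budget are illustrative choices of ours, not prescribed by the article.

```python
# Minimal SGD loop following steps 1-5 above, on a toy regression problem.
import numpy as np

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 3))                  # training inputs
w_true = np.array([2.0, -1.0, 0.5])
y = X @ w_true + 0.05 * rng.normal(size=500)   # noisy targets

w = rng.normal(size=3)      # step 1: random initialization
lr = 0.01                   # learning rate (hyperparameter)

for epoch in range(50):
    for i in rng.permutation(len(y)):          # step 2: visit examples in random order
        err = X[i] @ w - y[i]
        grad = 2 * err * X[i]                  # step 3: gradient of squared error at example i
        w -= lr * grad                         # step 4: small step against the gradient
    loss = np.mean((X @ w - y) ** 2)           # monitor the full-data loss
    # step 5: in practice, stop once the loss plateaus or a budget is exhausted

print("learned weights:", w)   # should approach w_true
```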

SGD is computationally efficient, especially for large datasets, because it only uses a single data point (or a small subset) at each iteration. Moreover, the randomness in SGD (from the random selection of data points) can help prevent the algorithm from getting stuck in suboptimal local minima and help it find the global minimum.

It’s important to note that SGD requires careful tuning of the learning rate and other hyperparameters. Furthermore, while SGD’s randomness can be an advantage, it can also cause the loss to fluctuate significantly, leading to a less stable convergence. There are variants of SGD, such as SGD with momentum, AdaGrad, RMSprop, and Adam, which address some of these issues and are often used in practice.
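For reference, the momentum variant mentioned above replaces the raw gradient step with a step along a running average of past gradients, which damps the fluctuations caused by noisy single-example gradients. A common textbook form, with illustrative names and coefficients of our choosing:

```python
# Sketch of one SGD-with-momentum update (illustrative, not a specific library's API).
def sgd_momentum_step(w, grad, velocity, lr=0.01, momentum=0.9):
    """Accumulate an exponentially decaying average of past gradients
    (the 'velocity') and step along it instead of the raw gradient."""
    velocity = momentum * velocity - lr * grad
    return w + velocity, velocity

# Usage inside a training loop: w, v = sgd_momentum_step(w, grad, v)
```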



Just in

Oso Semiconductor raises $5.2M

Oso Semiconductor has raised $5.2 million in seed funding. The round was led by Engine Ventures.

OpenAI launches ChatGPT Gov for U.S. government agencies — CNBC

It’s called ChatGPT Gov and was built specifically for U.S. government use, writes Hayden Field.

DeepSeek’s popular AI app is explicitly sending US data to China — Wired

Users have already reported several examples of DeepSeek censoring content that is critical of China or its policies, write Matt Burgess and Lily Hay Newman.

DeepSeek hit with large-scale cyberattack, says it’s limiting registrations — CNBC

DeepSeek on Monday said it would temporarily limit user registrations “due to large-scale malicious attacks” on its services, writes Hayden Field.