Consider an optimization engine whose target functions and constraints must be smooth (with at least a first-order continuous derivative) to work well.
Now consider a boolean constraint function f(x) that is not satisfied on (-inf, 0] and satisfied on (0, +inf). If the constraint is not satisfied, the optimization task does not converge and the engine fails.
Choosing an f(x) that directly computes a boolean value (0 or 1) with && operators out of n constrained variables x1, x2, ..., xn will be discontinuous and non-smooth, resulting in poor optimization engine performance.
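For reference, such a hard boolean constraint could be written as follows (a hypothetical sketch, hard_and is a made-up name, not from the question):

```cpp
#include <vector>

// Hard boolean AND over the constraint variables: returns 1.0 only when
// every variable is strictly positive, 0.0 otherwise. The jump at zero
// makes this discontinuous and unusable for a gradient-based engine.
double hard_and(const std::vector<double>& x) {
    for (double xi : x)
        if (xi <= 0.0) return 0.0;
    return 1.0;
}
```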
However, one way to "smooth" the AND operation with two operands in C++ is

#include <cmath>

double AND(double x, double y)
{
    return x + y - std::hypot(x, y);
}

which should ideally return a positive value when both x and y are positive, and a non-positive value otherwise.
Now, my question is: how would one generalize this function to n > 2 variables while maintaining smoothness and the sign properties of its return value?
A naive approach that works is to chain n - 1 pairwise applications of this function across the variables, but I am worried about future performance issues due to std::hypot() and about round-off errors.
P.S. The variables x1, x2, ..., xn have magnitudes roughly in [1e-5, 1], i.e. they are not extremely small.
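The naive chained approach mentioned above can be sketched like this (smooth_and is a name I've made up; each pairwise step is smooth, so the (n - 1)-fold composition is smooth as well):

```cpp
#include <cmath>
#include <vector>

// Left-fold the pairwise smooth AND over all variables:
// acc = AND(AND(AND(x1, x2), x3), ...).
// The result is positive iff every input is positive: a positive acc
// combined with a positive x stays positive, and once acc <= 0 it can
// never recover, since acc + x - hypot(acc, x) <= acc + x - |x| <= acc.
double smooth_and(const std::vector<double>& xs) {
    double acc = xs.front();
    for (std::size_t i = 1; i < xs.size(); ++i)
        acc = acc + xs[i] - std::hypot(acc, xs[i]);
    return acc;
}
```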
The question sounds like a homework assignment from an introductory AI lecture ;) Generalizing the AND function directly would mean extending the x + y term to a sum over all dimensions, minus the Euclidean norm (the hypot function). To keep the function smooth while reducing the risk of precision loss when computing the Euclidean norm, you might use the softmin function instead of the latter. It is differentiable thanks to its exponential weighting, and its smoothness can be controlled by a separate parameter (tau). For more details, have a look at the PyTorch documentation: https://pytorch.org/docs/stable/generated/torch.nn.Softmin.html (sorry that I have no C++ link at hand, but this stuff is more often used in Python ML libraries).
#include <iostream>
#include <vector>
#include <cmath>
#include <numeric>

// Exponentially weighted soft minimum; tau controls the smoothness.
double soft_min(const std::vector<double>& x, double tau = 0.1) {
    double sum_weighted = 0.0, sum_exp = 0.0;
    for (double xi : x) {
        double exp_term = std::exp(-xi / tau);
        sum_weighted += xi * exp_term;
        sum_exp += exp_term;
    }
    return sum_weighted / sum_exp;
}

// Smooth AND: sum of all variables minus the soft minimum.
double smooth_and_softmin(const std::vector<double>& x, double tau = 0.1) {
    double sum_x = std::accumulate(x.begin(), x.end(), 0.0);
    return sum_x - soft_min(x, tau);
}

int main() {
    // example data
    std::vector<double> x = {0.5, 0.8, 0.9};
    std::cout << "Smooth AND with SoftMin : " << smooth_and_softmin(x) << std::endl;
    // output 1.67916
    return 0;
}
Comments:
- "return x*y instead, which might generalise more easily?" – Martin Brown, Feb 12 at 10:16
- "abs would not qualify. They could want more, so they should specify what their criterion for smooth is." – Eric Postpischil, Feb 12 at 12:19
- "signbit is not a continuous function." – Eric Postpischil, Feb 12 at 18:17