2016 Multi-University Training Contest 2 Acperience
2016-07-22 10:26
Deep neural networks (DNN) have shown significant improvements in several application domains including computer vision and speech recognition. In computer vision, a particular type of DNN, known as the Convolutional Neural Network (CNN), has demonstrated state-of-the-art
results in object recognition and detection.
Convolutional neural networks show reliable results on object recognition and detection that are useful in real world applications. Concurrent to the recent progress in recognition, interesting advancements have been happening in virtual reality (VR by Oculus),
augmented reality (AR by HoloLens), and smart wearable devices. Putting these two pieces together, we argue that it is the right time to equip smart portable devices with the power of state-of-the-art recognition systems. However, CNN-based recognition systems
need large amounts of memory and computational power. While they perform well on expensive, GPU-based machines, they are often unsuitable for smaller devices like cell phones and embedded electronics.
In order to simplify the networks, Professor Zhang tries to introduce simple, efficient, and accurate approximations to CNNs by binarizing the weights. Professor Zhang needs your help.
More specifically, you are given a weighted vector W=(w1,w2,...,wn).
Professor Zhang would like to find a binary vector B=(b1,b2,...,bn) (bi∈{+1,−1}) and
a scaling factor α≥0 in
such a manner that ∥W−αB∥² is
minimum.
Note that ∥⋅∥ denotes
the Euclidean norm (i.e. ∥X∥ = √(x1² + ⋯ + xn²),
where X=(x1,x2,...,xn)).
Input
There are multiple test cases. The first line of input contains an integer T,
indicating the number of test cases. For each test case:
The first line contains an integer n (1≤n≤100000) --
the length of the vector. The next line contains n integers: w1,w2,...,wn (−10000≤wi≤10000).
Output
For each test case, output the minimum value of ∥W−αB∥² as
an irreducible fraction "p/q"
where p, q are
integers, q>0.
Sample Input
3
4
1 2 3 4
4
2 2 2 2
5
5 6 2 3 4
Sample Output
5/1
0/1
10/1
Author
zimpha
Expanding the expression: $\|W-\alpha B\|^2=\alpha^2\sum_{i=1}^{n}b_i^2-2\alpha\sum_{i=1}^{n}w_ib_i+\sum_{i=1}^{n}w_i^2$.
Since $b_i\in\{+1,-1\}$, we have $\sum_{i=1}^{n}b_i^2=n$; clearly $c=\sum_{i=1}^{n}w_i^2$ is also a constant.
The problem thus becomes minimizing $\alpha^2 n-2\alpha\sum_{i=1}^{n}w_ib_i+c$.
For any fixed $\alpha>0$, we just need $\sum_{i=1}^{n}w_ib_i$ to be as large as possible.
Clearly it is maximized when $b_i=\mathrm{sign}(w_i)$, which gives $\sum_{i=1}^{n}w_ib_i=\sum_{i=1}^{n}|w_i|$.
Furthermore, the expression above is a quadratic function of $\alpha$, so its minimum is attained at $\alpha=\frac{1}{n}\sum_{i=1}^{n}w_ib_i=\frac{1}{n}\sum_{i=1}^{n}|w_i|$.
Simplifying, the minimum value is $\sum_{i=1}^{n}w_i^2-\frac{1}{n}\left(\sum_{i=1}^{n}|w_i|\right)^2$.
#include<iostream>
#include<cstdio>
using namespace std;
typedef long long ll;

ll gcd(ll a, ll b)
{
    return b == 0 ? a : gcd(b, a % b);
}

int main()
{
    ll t, n;
    scanf("%lld", &t);
    while (t--)
    {
        scanf("%lld", &n);
        ll w, s1 = 0, sum = 0;        // s1 = sum of w_i^2, sum = sum of |w_i|
        for (ll i = 1; i <= n; i++)
        {
            scanf("%lld", &w);
            s1 += w * w;
            sum += w >= 0 ? w : -w;
        }
        // answer = s1 - sum^2/n = (n*s1 - sum^2)/n, printed in lowest terms;
        // gcd(0, n) = n, so a zero numerator correctly prints as 0/1
        ll g = gcd(n * s1 - sum * sum, n);
        printf("%lld/%lld\n", (n * s1 - sum * sum) / g, n / g);
    }
    return 0;
}