# [R] Manually calculate SVM

Noah Silverman noah at smartmediacorp.com
Thu Mar 25 23:59:42 CET 2010

```Thanks Steve,

1) I get that the kernel is a "normal function".  But my understanding
was that the kernel creates a higher-dimensional space than the original
data, thus allowing the SVM to be a "pseudo-linear" classifier in that
higher dimension.  So, if the kernel is the dot product, do I iterate
through all the values of X?  For example x_i * x_{i+1} * x_{i+2}, etc.?

2) I was looking at the SVM tool in the gputools package.  It looks
great, but it only delivers a predicted class {0,1}, and I want the
decision values.  So my thought was to calculate them on my own.  If I
have a support vector with 10 elements and a single coefficient for it,
and I am using an RBF kernel, how would I put that together in R?

Thanks!!

-N

On 3/25/10 3:33 PM, Steve Lianoglou wrote:
>> 1) how calculation of the kernel happens.
>>
> The kernel is just a "normal function" (though not every function is a
> proper kernel function): it takes two values, each value being a
> vector (or something similar) representing an example, and returns a
> real number.
>
>
>> 2) how to calculate the predicted value (y_hat) given a list of support
>> vectors and coefficients.
>>
> Sum over all S support vectors (SV): coef_s * label_s *
> kernel_function(SV_s, example).
> Then add the bias term to that value.
>
>
>> I've seen all the formulas and many of the books.  I get most of it
>> conceptually.  Where I'm having trouble is making the leap from concept
>> to actual use.  Ideally, I'd love to code some of the basic stuff in R
>> or Perl from scratch.  It won't be efficient, but it will help me better
>> understand just how the actual values are manipulated.
>> I know this isn't the function of the list, but I was hoping that someone
>> could point me toward some good resources or offer some suggestions.
>>
> Go watch Andrew Ng's Machine Learning lectures (they are free online).
> I think even one of his homework problems was to implement a "simple"
> SVM solver (though I could be mistaken).
>
>

```
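To make question 2 concrete, here is a minimal R sketch of the formula Steve describes: sum coef_s * label_s * K(SV_s, x) over the support vectors, then add the bias.  The support vectors, coefficients, gamma, and bias below are made-up toy values, not output from any real model, and the sketch assumes the coefficients already fold in the labels (as the `coefs` returned by libsvm-based packages such as e1071 do).

```r
## Toy inputs (made up for illustration): 3 support vectors with 10
## features each, one coefficient per support vector, and a bias term.
## `coefs` is assumed to be alpha_s * label_s, the form most SVM
## packages (e.g. e1071/libsvm) return.
set.seed(1)
SV    <- matrix(rnorm(30), nrow = 3)   # one support vector per row
coefs <- c(0.5, -1.2, 0.7)             # coef_s = alpha_s * label_s
b     <- 0.1                           # bias term
x     <- rnorm(10)                     # new example to score

## RBF kernel: K(u, v) = exp(-gamma * ||u - v||^2)
gamma      <- 0.1
rbf_kernel <- function(u, v, gamma) exp(-gamma * sum((u - v)^2))

## A linear kernel, for comparison, is just the dot product of the two
## complete example vectors -- no term-by-term iteration over x_i.
linear_kernel <- function(u, v) sum(u * v)

## Decision value: sum_s coef_s * K(SV_s, x), plus the bias
k_vals <- apply(SV, 1, rbf_kernel, v = x, gamma = gamma)
y_hat  <- sum(coefs * k_vals) + b

## The predicted class is the sign of the decision value
y_class <- if (y_hat >= 0) 1 else -1
cat("decision value:", y_hat, " class:", y_class, "\n")
```

This also touches question 1: with a kernel you never enumerate the higher-dimensional features explicitly; you only ever evaluate the kernel on whole pairs of example vectors.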