“AI will be huge in terms of its impact and helpfulness in social media analytics”

Tam Su

Interview with Tam Su, Senior Director of Product at Crimson Hexagon, a social media analytics company with headquarters in Boston, US.

Hi Tam, what is your background and what is included in your current role at Crimson Hexagon?

I have been in tech for about 18 years and have helped start a few companies along the way. I am currently the Senior Director of Product at Crimson Hexagon. My responsibilities include product design, strategy and user experience.

What differentiates Crimson Hexagon from other social media analytics platforms?

Great question. Our biggest advantage over other offerings is the strength of our technology platform and the vastness of our data store (the amount of data we store ourselves). Our CTO jokes that we have more public data than anyone on earth, except for the NSA. We have the best analysis platform and a superior foundation; that is where we lead.

Traditionally, we have been weaker on the user experience front, so we have been working hard for the last year and a half to overcome that. Look for exciting announcements from us in the coming months.

Crimson Hexagon recently received a $20M investment; how will that affect your product in the near future?

As mentioned, expect exciting announcements in the next two to three months, as we launch our new offerings. These have less to do with our funding and more to do with the fact that we’ve been focused on delivering a world-class user experience.

The funding will accelerate our efforts across the board – across product lines, technology, data sources, and our research pipeline as we get more into deep learning and related AI efforts.

What are the greatest challenges ahead for Crimson Hexagon when it comes to serving your customers with analysis and developing your offering?

Great question. The greatest challenge for any company in an accelerated growth stage after a big funding event is the danger of not staying focused on the important things. In other words, a lot of companies have the tendency to say: we have the money now to spend on things that we didn’t have before, so let’s go after things that maybe we shouldn’t.

We need to stay laser-focused on the most important set of needs for our customers. We continue to concentrate on social listening and analysis, with a special focus on insights derived from that data, as we build a road map to better predict customer behaviors using social data.

As we look to the future, we aim to become more prescriptive in what we can recommend to our clients.

About a year ago you introduced image analysis and logo detection. How well has that played out when it comes to quality and client adoption?

A year ago we launched a rudimentary product; over the last year, we have sharpened those skills and continue to improve the quality of our product in terms of accuracy, volume and so on. It’s not as widely adopted as we hoped, which has to do with analysis maturity. Image analysis is still in its early days; images have become an important source of data, but many of our clients are not at that point yet in their social media analysis journey.

What are the next steps when it comes to enhancing the use of image analysis?

We have an exciting pipeline, looking at scene, object and facial detection. We can tell if a person is in a restaurant or on a mountain top based on the scene. We can detect not only logos but also items, allowing us to tell if a Starbucks logo is on a mug or on a can of coffee.

Facial detection now allows us to detect sentiment without having to analyze associated text, which may not even exist. If people are smiling, we know they’re happy; if they are frowning, we know they are not. Being able to discern sentiment straight from the image is huge.

How do you think the development in AI will affect the social media analytics business?

It will be huge in terms of its impact and helpfulness; it’s going to affect a number of areas, ranging from the analysis itself to certain recommendations that we may be able to offer our users. For example, we might suggest that this type of data is best paired with these types of charts. Another example is the idea of being able to recommend actions based on what we see. We can say that this tweet from an influencer is likely to be further amplified with some promotion; if you promote it, it will likely be amplified to ten times its current reach. To be able to predict that and make recommendations is huge.

Another frontier is data ingestion. We are thinking about proprietary data and how AI can help us ingest proprietary corporate data, like chat logs, more efficiently, and then be able to sort through that data, make sense of it, and analyze it more effectively.

Which social platforms do you see having the most potential in the future?

We are very bullish on Tumblr, the third largest network by active users. We have a partnership with them, where we get the entire firehose. We are very excited to see their active user growth trends, and where the entire network will go.

Beyond that, Twitter and Instagram will continue to have bright futures. We are partners with Twitter, and have access to all their data. Instagram is challenging because of their limited API, but hopefully that will change.

Are there any currently closed social platforms that you would be interested in tapping into for monitoring that would benefit your customers?

Instagram and Facebook are headaches for all of us. We would all like to know what is going on inside Facebook, while respecting the privacy of Facebook’s users. It would be great to get a deeper look than we are able to through current channels.

What kind of data that you would need for even better analysis is the hardest to get hold of?

Facebook and Instagram; beyond that, we are curious about the various messaging apps, such as Snapchat, because our clients are becoming curious about them. We would love to get behind the curtain there, within privacy restrictions, to understand what people are talking about and care about.

What kind of data or media that you do not monitor today could be interesting in the future?

The biggest thing in that regard is making our ability to work with proprietary corporate data more robust. We currently have an ingestion mechanism that works; making it more open and robust so companies can gain greater value is a goal.

How do you think the media monitoring and social media analytics industry will change in the next five years?

The next five years will be very exciting for the whole industry as it grows and matures. A lot of unknowns will be shaken out of the system. Perhaps the biggest change is the ability for platforms leveraging AI to predict scenarios and outcomes in order to prescribe and recommend actions. So, in the next five years we should see that technology developing and maturing in a more visible way.

By Renata Ilitsky

Easier to tap into the global colorful blogosphere

Colorful groupie

We have now made it a lot easier for you to access the world’s biggest pool of active blogs.

Just sign up on our site and instantly receive an API key that gives you access to more than 6 million active blogs and their posts. We also take pride in adding 15,000 new active blogs every day, which is important due to the high turnover in the blogosphere.

We know that you want to get started swiftly with retrieving blog data for evaluation and use, so we have recently developed API clients in the most common programming languages.
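If you would rather start with a plain HTTP call before picking up one of the clients, the short Python sketch below shows roughly what a request could look like. Note that the endpoint URL, parameter names and query syntax are assumptions made for illustration; the API clients and the official documentation define the exact interface.

    # Minimal sketch of querying the blog search API over plain HTTP.
    # NOTE: the endpoint URL, parameter names and query syntax are
    # illustrative assumptions; see the API clients and documentation
    # for the exact interface.
    import requests

    API_KEY = "your-api-key"  # received instantly when you sign up on our site
    ENDPOINT = "https://api.twingly.com/blog/search/api/v3/search"  # assumed endpoint

    params = {
        "apikey": API_KEY,
        # Assumed query syntax: restrict to English-language blogs and search for a phrase
        "q": 'lang:en "social media analytics"',
    }

    response = requests.get(ENDPOINT, params=params, timeout=10)
    response.raise_for_status()

    # Print the start of the raw response; a real integration would parse it and
    # work with the individual posts (title, URL, blog rank, and so on).
    print(response.text[:500])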

The blogs are divided by spoken languages and they are all assigned a blog rank, based on how influential they are in the colorful blogosphere.

To make it easy to compare against your current coverage of blog data, we have made our coverage of the major languages publicly available. Just let us know if you need any input regarding coverage in other languages.

So if you are looking to extend your current coverage of blogs, get your API key today and start exploring. 🙂

By Pontus Edenberg

With this one simple trick we got 5 more API clients

We believe that in order to provide a good API service you need sensible API endpoints, but more importantly you also need well-tested, supported and easy-to-use API clients. With this in mind, we set out to provide API clients for the programming languages most likely to be used, and already being used, when integrating with us.

Over the past few months we have released five new Twingly Search API clients, supplementing our existing Ruby client. Check them out at GitHub.

But how did we do it? Are we proficient in C#, Java, Ruby, Python, PHP and Node.js? We wish. Inspired by the likes of Diffbot, we decided to use Upwork to help us find freelancers who would make our vision come to life.

Our process was something along these lines:

  1. Improve the Ruby client so that it could be used as a reference implementation
  2. Decide which freelancer platform to use; we evaluated a handful of them
  3. Determine a maximum cost per client
  4. Write a job listing for one of the programming languages; this included researching things such as: which packaging systems are there? Which CI services? Which versions should we support?
  5. Post the job listing and wait a few days for the job applications to drop in
  6. Choose among the candidates; usually more than 10 people applied for each job
  7. Hire the most suitable candidate; for example, we looked at references from earlier freelancing jobs, example projects, the GitHub profile and so on. We noticed it was quite common to find empty, or skeleton, repositories on GitHub, requiring some effort on our part to find actual code 🙂
  8. Review the work, repeat until both parties are satisfied
  9. Accept the work/API client, including source code in a GitHub repository
  10. Repeat steps 4-9 for every programming language; for each iteration, improve the job listing with lessons learned and links to the newly implemented client

It is worth mentioning that we spent plenty of time reviewing the work for a few reasons:

  • We want consistent behavior for all of our clients (see the sketch after this list).
  • We want our customers to have a good user experience.
  • We need to understand the code in order to be able to maintain it.
  • We are curious creatures and are always eager to learn new things.
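To make the first point about consistent behavior a bit more concrete, here is a hypothetical sketch, written in Python purely for illustration, of the kind of shared behavior contract that every client, regardless of language, should satisfy. The cases and the fake client are made up for this example; they are not our actual test suite.

    # Hypothetical cross-client behavior contract, sketched in Python for illustration.
    # The cases, expectations and the fake client are made up; they are not the actual
    # test suite used for our API clients.

    CASES = [
        {"name": "empty query",    "query": "",       "expect": "error"},
        {"name": "simple keyword", "query": "github", "expect": "results"},
    ]

    class FakeClient:
        """Stand-in for a language-specific API client adapter."""
        def search(self, query):
            # Every client should reject an empty query in the same way.
            if not query:
                return "error"
            return "results"

    def run_contract(client):
        """Return the names of the cases where a client deviates from the contract."""
        return [
            case["name"]
            for case in CASES
            if client.search(case["query"]) != case["expect"]
        ]

    print(run_contract(FakeClient()))  # an empty list means the client matches the contract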

For others venturing into the freelancing world, here are some of the most important things we learned from this experience:

  • Be explicit with the requirements. We discovered that for some freelancers a vague “point at an existing implementation” would work just fine and lead to the expected outcome, but for others it would not turn out the way we anticipated.
  • If you are using GitHub and pull requests, do not create the GitHub repository yourself and let the freelancer submit a large initial pull request; those are nearly impossible to review (or rather, hard to give feedback on). Instead, either let the freelancer start out in a repository of their own or invite them to collaborate on your repository. It is much easier to review and provide feedback using GitHub’s issues than through comments on a large pull request.
  • Depending upon your use-case, you might not need to spend as much time reviewing the work as we did. Remember that curiosity is a double-edged sword.

If you are interested in more information, please leave a comment and we’ll make sure to respond!

By Robin Wallin