Feed aggregator

Building PHP Projects on AWS CodeBuild

Ben Ramsey - Sat, 03/12/2016 - 00:00
At AWS re:Invent, Amazon announced a new service for building and testing code: AWS CodeBuild. The service provides managed build environments for Android, Java, Python, Ruby, Golang, and Node.js. While PHP is missing, it is possible to build PHP projects using the service. Follow along to find out how.
Categories: Not PHP-GTK

shpub 0.3.0 released

CWeiske - Thu, 22/09/2016 - 23:00
shpub 0.3.0 released
Categories: Community

Teaching Ozlo about Pokémon GO

Anant Narayanan's blog - Thu, 04/08/2016 - 01:00

Pokémon GO is all the rage these days. Ozlo, your friendly AI sidekick, would be remiss if he didn’t help you catch them all!

Thanks to Ozlo’s unique, knowledge-based approach to the world, we were able to teach him about Pokémon in just under a week, including how to find PokéStops and Pokémon Gyms near places you might be going. In this blog post, we’ll take a look at some of Ozlo’s inner workings, what goes into teaching him a completely new concept, and why his ability to learn quickly matters.

The process involves three high-level steps:

  • Feeding Ozlo data about the new concept
  • Teaching Ozlo to understand how people talk about the concept
  • Teaching Ozlo how to talk to people about what it knows

We’ll cover each of these steps one by one and then discuss why it’s important we do things this way — and why that makes Ozlo fundamentally different from many other chatbots and AI assistants out there.


Ozlo’s view of the world consists of entities (people, places, or things) and relationships among them. Teaching Ozlo about something new begins with acquiring data about the subject so that we can augment his knowledge of the world. This can happen by several means — crawling the web, hitting APIs, and obtaining data from partners, for example.

In Pokémon GO’s case, we decided to focus on a use-case that helps you play the game effectively but doesn’t break it or cheat in any way: finding PokéStops. PokéStops are places all around the world, and they have certain attributes that identify them: coordinates, a name, a picture, and sometimes a description.

Once we found all the PokéStops in the US, we turned them into entities and started creating relationships. Ozlo already knows about all the cities in the US as well as what landmarks and restaurants exist in each city. With this knowledge, he can reason that if a given place is inside the polygon for a given city’s boundary, then the place must be in that city (and so on…).
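The containment reasoning described above can be sketched with a standard ray-casting (even-odd) point-in-polygon test. This is only an illustration of the idea, not Ozlo’s actual code; the types and polygon here are made up for the example.

```go
package main

import "fmt"

// Point is a longitude/latitude pair.
type Point struct{ X, Y float64 }

// inPolygon reports whether p lies inside the polygon using the
// even-odd (ray-casting) rule: count how many polygon edges a
// horizontal ray from p crosses; an odd count means "inside".
func inPolygon(p Point, poly []Point) bool {
	inside := false
	for i, j := 0, len(poly)-1; i < len(poly); j, i = i, i+1 {
		a, b := poly[i], poly[j]
		// Does the edge (a, b) straddle p's horizontal line, and is
		// the crossing point to the right of p?
		if (a.Y > p.Y) != (b.Y > p.Y) &&
			p.X < (b.X-a.X)*(p.Y-a.Y)/(b.Y-a.Y)+a.X {
			inside = !inside
		}
	}
	return inside
}

func main() {
	// A toy square "city boundary" and a PokéStop inside it.
	city := []Point{{0, 0}, {4, 0}, {4, 4}, {0, 4}}
	stop := Point{1, 2}
	fmt.Println(inPolygon(stop, city)) // true
}
```

With a check like this, "which city is this PokéStop in?" reduces to testing the stop’s coordinates against each candidate city boundary.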

When this process concludes, Ozlo has a mental map of where all the PokéStops in the US are located, which of them are “gyms”, what cities they are in, and which landmarks and restaurants they are near.


Next, we had to teach Ozlo some of the common ways in which humans might ask him about PokéStops. In the beginning that involves just writing out some examples and telling Ozlo what each of them mean.

Consider the following sentence, resembling something a human might ask Ozlo:

“Show me pokemon gyms near the ferry building”

There’s a lot in that sentence that Ozlo can already understand! He has a basic understanding of the English language, but also knows how people talk about restaurants and landmarks (since we taught him that earlier). What does Ozlo see in that sentence?

“show me”: Here’s a hint that the answer to this question requires some sort of visual presentation.

“near”: I’ve seen this word many times before, and when it is followed by the name of a place, I know what it means.

“ferry building”: Looks like I have many entities that match this name. But I can rank all the places with this name by their popularity and distance from the user’s current location to narrow down a likely candidate.

The only part of that sentence Ozlo didn’t quite understand was “pokemon gyms”. This is where we step in and give him some examples along with what they mean:

“pokestops”: This means entities that are PokéStops

“pokemon gyms”: This means entities that are PokéStops of type “gym”

We also added many more variations of the above to give him a basic understanding of PokéStops. And don’t forget — Ozlo also keeps learning as you use him — so he’ll collect a lot more examples over time than the ones we start him off with!
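The phrase-to-meaning examples above can be pictured as a lookup from surface phrases to structured filters. The sketch below is hypothetical — the `Filter` type and `phraseFilters` table are invented for illustration, and real query understanding is far richer than substring matching — but it shows the basic phrase-to-structure idea:

```go
package main

import (
	"fmt"
	"strings"
)

// Filter is a structured constraint implied by a phrase.
type Filter struct {
	Kind    string // entity type, e.g. "pokestop"
	SubType string // optional refinement, e.g. "gym"
}

// phraseFilters holds example pairs like the ones in the post:
// a phrase a human might use, and what it means in entity terms.
var phraseFilters = map[string]Filter{
	"pokestops":    {Kind: "pokestop"},
	"pokemon gyms": {Kind: "pokestop", SubType: "gym"},
}

// interpret scans a query for known phrases and returns the
// structured filters they imply.
func interpret(query string) []Filter {
	q := strings.ToLower(query)
	var out []Filter
	for phrase, f := range phraseFilters {
		if strings.Contains(q, phrase) {
			out = append(out, f)
		}
	}
	return out
}

func main() {
	fmt.Println(interpret("Show me pokemon gyms near the ferry building"))
}
```

From here, the remaining fragments (“show me”, “near”, “ferry building”) resolve through the language understanding Ozlo already has, and the whole query becomes a structured search over entities.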


The final step was to teach Ozlo how to turn his answer into words and interactions that humans can understand. In many ways this is exactly the reverse of Ozlo trying to translate what a human said into terms he can understand.

Ozlo already has a good knowledge of English, so he can mostly construct the sentence on his own. We just need to give him a few hints and we get:

“There are many Pokémon Gyms around Ferry Building”

Then we construct the visual format of the response. In our iOS app we settled on using the “multi-pin map” element, which is an easy way to view several points of interest in a given geographic area. For now, we just tell Ozlo what type of visual result format to use, based on the user’s device.

Ozlo’s capabilities aren’t limited to just rendering maps though - he can choose between a variety of output formats - and we pick the one that’s best suited to the medium you’re using to communicate with him.

Why This Matters

Why go to all this effort to actually teach Ozlo about PokéStops instead of just having Ozlo redirect your question to some other service? We’ve talked about the multi-agent problem before — and we believe there is a fundamental difference between bots that know things and bots that guess what other services might know about things.

As Ozlo’s knowledge of the world grows, adding more data enriches his entire world view. There’s a network effect: because entities have relationships with each other, adding new entities has an exponential effect on Ozlo’s understanding of the world. This is what lets us leverage the fact that Ozlo already knows about “the Ferry Building” to help you find out which PokéStops are near it with only a minimal amount of effort.

We can’t wait for the day when we’re not the only ones teaching Ozlo about new concepts! In the meantime, please keep using Ozlo and giving him feedback to help him continue to learn more about the world.

Categories: Not PHP-GTK

Meet Ozlo

Anant Narayanan's blog - Thu, 12/05/2016 - 01:00

Two days ago, a project I’ve been working on for a little over two years was unveiled to the world. Meet Ozlo, your friendly AI sidekick!

First things first: if you haven’t signed up yet, hit up this link which includes a VIP code to fast-track you into our invite-only app.

A lot has been said about Ozlo already: Charles Jolley (co-founder), John Lilly (investor), Lloyd Hilaiel (friend & colleague), Todd Agulnick (friend & colleague) and even Buzzfeed! Here’s my perspective…


It didn’t take me very long, after I first heard the idea for a better mobile search experience from Mike and Charles, to stop what I was doing and jump on board. (John calls Mike an “anytime, anything, anywhere” person, and it couldn’t be truer.)

The fundamental problem we’re trying to solve is that even though our smart phones enable us to do a lot more than we could before, the process of finding people, places and things on them is not very different from how you would do it on a desktop.

That’s usually the natural course for any technology to take. Take publishing, for instance: when tablets were first introduced, a publication’s first instinct was to just take what it had on paper and turn it into pixels. The first application on any new platform is usually a v1 – “available here too” – product. This first version often under-utilizes the platform’s true capabilities, and its creators can quickly be lulled into thinking that they’ve created the optimal experience for the consumer.

What’s v2 for search on mobile devices? Answering that question is why we created Ozlo.


In attempting to answer this question, we built something that we thought might work. It didn’t work quite as well as we’d have liked. So we did it again. And again. Fast-forward two years and you arrive at Ozlo: a personal and intelligent companion that helps you find things.

The first manifestation of that idea is an iOS application that can help you find food. In the app, you interact with Ozlo via a chat-like interface. Here I am trying to find that place that I can’t quite remember the name of:

This iteration of the app is purposely focused on one goal – finding you food. But there are several underlying themes that have the potential to pave Ozlo’s way to something grander:


Searching for something is usually not a one-shot activity. Humans don’t work that way. We ask a question and often follow up with more questions, until we’ve refined our own thoughts enough to arrive at the answer we’re looking for. It’s exciting that Ozlo has the potential to participate in this back-and-forth.

It’s not merely a coincidence that my previous blog post, a little less than a year ago, was about the Amazon Echo. I have an Alexa at home and I love it. I can’t wait for the day when I can talk to Ozlo like I can talk to Alexa, only better!


Ozlo has the potential to get to know you over time and learn about your preferences and interests in a meaningful way. To me, this brings a face to the otherwise utilitarian search box, which feels disconnected and impersonal.

As a vegetarian, I can already appreciate Ozlo helping me find hidden gems at restaurants I’d usually dismiss. What if Ozlo could also recommend movies for me to watch, grab that hard-to-get restaurant reservation, and help me find the perfect anniversary gift?


In the past few years, technology has finally gotten to the point where building an agent that can really understand what humans say seems tantalizingly close to possible. We’ve observed the resurrection of the term “AI” to refer to this sort of thing. It’s often an overloaded term, but there is no doubt that the industry as a whole has made big technological strides in deep learning and machine intelligence.

Ozlo is different from typical search engines, the ones that return results containing the same words as your query without knowing what the words mean. Ozlo tries to understand what you said and then tries to arrive at an answer. To me, that makes Ozlo intelligent.

Training Ozlo to understand the nuances of human language is going to be a very difficult task. But it is by no means impossible, given the resources we (as computer engineers and scientists) have at our disposal these days.


The really interesting bits are in the technology behind Ozlo and how we built it. This is some of the deepest technology I’ve ever had a part in building and I’m extremely proud of it. To make Ozlo work, we’ve had to write several pieces of software from scratch.

On the backend:

  • Data Pipeline:
    to ingest, dedupe, and glean structure from the mess of data we find, at scale, with speed.
  • Search Engine:
    to index the facts our data pipeline emits and allow us to efficiently query it; at scale, with speed.
  • Query Understanding:
    to turn human language into a series of structured queries machines can understand.
  • Dialog System:
    to keep track of the high-level structure of the conversation you’re having with Ozlo.

On the frontend:

  • Language Synthesis:
    to turn structured results back into friendly text humans can understand.
  • Layout Language:
    to efficiently and generatively render results as a graphical layout.
  • View Synthesis:
    to aggregate, refine and generate the final layout humans will see.
  • iOS App:
    to turn that layout back into pixels that are delightful to look at and interact with.

We built most of our backend in Go. It’s no secret that I’ve been a fan of Go since its inception, primarily because of my affinity for Plan 9, but this is the first time I’ve been able to observe it being used at a large scale for a production-quality project. I couldn’t be happier with our choice, and I’ll admit that I’ve had some days where I get into work only because I’m excited by the prospect of writing some Go.

We built most of our frontend in NodeJS (and ObjC for the iOS app, of course). It’s also no secret that I’ve been a huge proponent of JavaScript, and our frontend has been chugging along happily (we’ve had a few refactorings, but really, what JS code base doesn’t go through at least two?). Say what you will about JS & NPM, especially in the recent past; one cannot deny the convenience and speed of development offered by the JS runtime.

If any part of this sounds exciting to you, why not join me in working on Ozlo?


Launches are really fun. But the best part comes right after. As we see how people use and interact with Ozlo, I can’t wait to see where we take him next. Movies? Music? Sports? Products? Ozlo in the car? On Alexa at home? Most likely something we haven’t thought of yet.

Come be a part of it – don’t forget to use the VIP code to sign up – and please send us your feedback!

Categories: Not PHP-GTK

Fixed broken links

PHP-GTK on Github - Mon, 25/05/2015 - 10:49
Fixed broken links
Categories: Community

The Hack vs. The MVP

Crisscott - Mon, 10/06/2013 - 18:42

We build lots of features for GSN.com. We do that with a surprisingly small number of folks. Our users play tens of millions of games on a daily basis. Our team consists of one designer, a product manager, four developers, two QA folks, and a couple of Ops people. There’s a lot to manage and improve for those ten folks. That means we often have to make trade-offs in order to get stuff out to production.

During a recent feature discussion session, we were debating whether we should go quick and dirty or build something “the right way.” I reiterated that the team was responsible for the end deliverable and that they have to stand by the decisions they make. I explained that they were empowered and required to do what they think is best for the business. Sometimes quick and dirty is the best way; sometimes it isn’t. One of the developers on the team had this to say, “I am trying to figure out when we do MVP and when we don’t.” That comment was innocent enough, but it really struck me.

The engineering team here at GSN had associated MVP with quick and dirty. Unfortunately, I suspect this isn’t unique to our team. I suspect it comes from a deep-seated misconception that the basic principle of an MVP is about saving time. Additionally, there is a related misconception that hacking something together saves time. Neither of these is really true.


The idea behind building a minimum viable product is to ensure that you and your team focus on the right things. It is a big waste of time to build something that people don’t find valuable. Building an MVP is a way to create a product that has just enough features to tell whether users will ultimately find enough value in your product to use it. If you spend six months building the most awesome and complete piece of software the world has ever seen, but users don’t really have a need for it, you have wasted six months. However, if you spend one month building just enough of a product to get users’ feedback and subsequently determine that your users have no need for an integrated cheese-making schedule, you have saved significant energy and money.

An MVP is about features, not speed to market. You build an MVP because you can never be sure what your users want or need until you get something in their hands. You do not build an MVP because you are pressed for time. Determining what features go into your MVP is not done by looking at a schedule and seeing who is available to do the work. Feature sets are determined by looking critically at the plan and deciding if each individual feature is truly required to gauge the product’s value.

The Hack

Hacking something together is all about saving time (in the short term). To say that you have hacked something together says nothing of the feature set. It does not mean that you have left things out or that you have determined that a specific feature isn’t really necessary after all. A hack is all about short-term turnaround on delivery.

Hacks typically come at the cost of long-term maintainability. In our conversation a few weeks ago, the discussion centered on whether we added a few if statements to existing pages (the hack) or took the time to break things down into more manageable and interchangeable chunks (the “right” way). We had a short window for delivery, and we knew that the interchangeable-chunks approach, while not an overly complicated task, would take more time. Both solutions, however, would provide the same feature set. Our decision was not “MVP or no MVP.” It was “hack or no hack.”


In the end, we decided to go the “right” way. We had enough time, and we knew that the investment in cleaning things up now would pay off the next time around. More important than what we ended up implementing, however, was coming to a consensus on how we wanted to build software. We want to build small and test things out before we put too much time and effort into a product, but we also want to invest in our future now. Sure, there may be times when we decide that a hack and the “right” way are the same thing. We may do things quick and dirty in order to get stuff out the door, but we will come back to them quickly and clean them up. Our rule can now be summarized as: MVP always; hack only as a last resort.

Categories: Community

Why I love WebSockets

Crisscott - Mon, 25/03/2013 - 23:01

When I was in school, passing notes took some effort. First, you needed to find a piece of paper that was large enough for your message, but small enough that it could be folded into the requisite football shape. Next, you had to write something. Anything smaller than several sentences just wasn’t worth the overhead, so you had to write about half a page’s worth of stuff, or draw a picture large and detailed enough to make it worth it. After that, you set about the process of folding your note into the aforementioned form. Finally, you had to negotiate with your neighbor to get the note from your desk to its final destination. All that was just to send the message. On the receiving side, the note was unfolded and read. Then your counterpart would go about constructing a response, refolding the note, and negotiating the return trip.


Passing notes in class was a task that required effort, skill and time. You sent a message and you waited for a response. If you thought of something new that you really needed to say, you had to wait until the response came back. At that point, you could alter your original message or add new content. While your note was in transit or being read and replied to on the other end, you had no control. You were at the mercy of the medium over which you were forced to communicate. Note passing simply isn’t designed to allow for short, quick, asynchronous communication.

Nowadays, kids just text each other on their smartphones. They send messages quickly and easily without having to invest in all that overhead. After a bit of upfront work to get someone’s phone number, the back channel classroom chatter flows freely. That is, until someone forgets to silence their phone and the teacher confiscates everything with a battery.

Just as the methods of slacking off in school have evolved, so have methods of communicating over the Web. HTTP is the note passing of the Internet. It works well enough for most communications, and when the message is large enough, the overhead is minimal. However, it is less efficient for smaller messages. The headers included by browsers these days can easily outweigh the message body.

Also, just like note passing, HTTP is synchronous. The client sends a request and waits until the server responds. If there is something new to be said, a new request is initiated. If the server has something to add, it has to wait until it is asked. It can’t simply send a message when it is ready.

WebSockets are the smartphone to HTTP’s notes. They let you send information quickly and easily. Why go through all that folding when you can simply send a text to say “idk, wut do u think we should do?” Why use 1K of headers when all you want to know is, “Did someone else kill my rat?” Better yet, why ask at all? Why not have the server tell you when the rat has been killed by someone else?
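That trade works because a WebSocket connection pays its HTTP-style overhead exactly once, in an Upgrade handshake; after that, each frame carries only a few bytes of framing. As a small sketch of that handshake (not tied to any particular WebSocket library), here is how a server derives the Sec-WebSocket-Accept response header defined by RFC 6455, using the sample key from the RFC:

```go
package main

import (
	"crypto/sha1"
	"encoding/base64"
	"fmt"
)

// wsMagic is the fixed GUID that every server appends to the
// client's Sec-WebSocket-Key, as specified by RFC 6455.
const wsMagic = "258EAFA5-E914-47DA-95CA-C5AB0DC85B11"

// acceptKey computes the Sec-WebSocket-Accept value: the base64
// encoding of the SHA-1 hash of the client key concatenated with
// the magic GUID. Returning this proves the server understood the
// WebSocket upgrade request.
func acceptKey(clientKey string) string {
	h := sha1.Sum([]byte(clientKey + wsMagic))
	return base64.StdEncoding.EncodeToString(h[:])
}

func main() {
	// Sample Sec-WebSocket-Key from RFC 6455, section 1.3.
	fmt.Println(acceptKey("dGhlIHNhbXBsZSBub25jZQ=="))
	// → s3pPLMBiTxaQ9kYGzzhZRbK+xOo=
}
```

Once the server replies with that header and a 101 status, the HTTP conversation is over and both sides are free to push small frames in either direction at any time.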

WebSockets are made for small messages. They are made for asynchronous communications. They are made for the types of applications users expect these days. That’s why I like WebSockets so much. They let me communicate without overhead or rigorous process. I can write an application that is free from request/response pairs. I can write an application that responds as quickly as my users can act. I can write the applications that I like to write.

Categories: Community

D is for Documentation

Crisscott - Sat, 02/03/2013 - 16:07

Code is the way in which humans tell computers what to do. Lots of effort has gone into making code easier for humans to read in the form of high level languages like Java or C++ and scripting languages like PHP, Ruby, and Python. Despite mankind’s best efforts, writing code is still clearly an exercise for talking to computers. It has not evolved to the point where talking to a computer is as easy and natural as talking to other people.

That’s why documentation is so important. Programming languages are just a translation of a developer’s intent into something a computer can execute. The code may show the computer what you intended for it to do, but the context is lost when another developer comes back to it later. Computers don’t know what to do with context. If they did, the days of Skynet would already be upon us. Humans can process context and it makes the process of dissecting and understanding a computer program much easier.

I find it both sad and hilarious when I see a speaker evangelizing code without comments. Invariably, the speaker shows a slide with five lines of code and spends ten minutes explaining its form and function. Even the simplest and most contrived examples from some of the foremost experts in the field require context and explanation.

When a bug decides to show itself at three in the morning, in code that someone else wrote, context and intent are two very powerful tools. When bugs are found, the question “What was this supposed to do?” is more common than “What is this thing doing?” Figuring out what it is doing is easier when you have good log data to go on. Knowing what it was supposed to do is something only the original developer can tell you.

If you aren’t aware of the concept of Test Driven Development, I strongly recommend you dig into it. In summary, tests are written before the code to ensure that the code matches the business requirements. I would like to propose a complementary development driver: Documentation Driven Development. By writing out the code as comments first, you can ensure that the context of the development process will be captured. For example, I start writing code with a docblock like this:

/**
 * Returns the array of AMQP arguments for the given queue.
 *
 * Depending on the configuration available, we may have one or more arguments which
 * need to be sent to RabbitMQ when the queue is declared. These arguments could be
 * things like high availability configurations.
 *
 * If something in getInstance() is failing, check here first. Trying to declare a
 * queue with a set of arguments that does not match the arguments which were used
 * the first time the queue was declared most likely will not work. Check the config
 * for AMQP and make sure that the arguments have not been changed since the queue
 * was originally created. The easiest way to reset them is to kill off the queue
 * and try to recreate it based on the new config.
 *
 * @param string $name The name of the queue which will be used as a key in configs.
 *
 * @return array The array of arguments from the config.
 */

Next I dive into the method body itself:

private static function _getQueueArgs($name)
{
    // Start with nothing.

    // We may need to set some configuration arguments.

    // Check for queue specific args first and then try defaults. We will log where we
    // found the data.

    // Return the args we found.
}

After that, I layer in the actual code:

/**
 * Returns the array of AMQP arguments for the given queue.
 *
 * Depending on the configuration available, we may have one or more arguments which
 * need to be sent to RabbitMQ when the queue is declared. These arguments could be
 * things like high availability configurations.
 *
 * If something in getInstance() is failing, check here first. Trying to declare a
 * queue with a set of arguments that does not match the arguments which were used
 * the first time the queue was declared most likely will not work. Check the config
 * for AMQP and make sure that the arguments have not been changed since the queue
 * was originally created. The easiest way to reset them is to kill off the queue
 * and try to recreate it based on the new config.
 *
 * @param string $name The name of the queue which will be used as a key in configs.
 *
 * @return array The array of arguments from the config.
 */
private static function _getQueueArgs($name)
{
    static::$logger->trace('Entering ' . __FUNCTION__);

    // Start with nothing.
    $args = array();

    // We may need to set some configuration arguments.
    $cfg = Settings\AMQP::getInstance();

    // Check for queue specific args first and then try defaults. We will log where we
    // found the data.
    if (array_key_exists($name, $cfg['queue_arguments'])) {
        $args = $cfg['queue_arguments'][$name];
        static::$logger->info('Queue specific args found for ' . $name);
    } elseif (array_key_exists('default', $cfg['queue_arguments'])) {
        $args = $cfg['queue_arguments']['default'];
        static::$logger->info('Default args used for ' . $name);
    }

    // Return the args we found.
    static::$logger->trace('Exiting ' . __FUNCTION__ . ' on success.');
    return $args;
}

The final result is a small method which is well documented and took little, if any, extra time to write.

Armed with data from logs, unit tests which ensure functionality, configurations to control execution, isolation switches to lock down features, and contextual information in the form of inline documentation, the process of finding bugs becomes easier. LUCID code communicates as if it were a member of the development team. It does all the things you expect from a coworker: it talks, it makes commitments, it works around problems, and it keeps a record of both what it is doing and why it is doing it.

Categories: Community