Feed aggregator

phancap, self-hosted website screenshot service

CWeiske - Thu, 17/04/2014 - 23:00
Categories: Community

Will Encryption Catch on With Keybase?

Ben Ramsey - Sat, 22/03/2014 - 07:14

Email is not secure. Let’s stop fooling ourselves. Just because I use Gmail, and I’m using it over HTTPS, does not mean that the email I send or receive is encrypted while being transmitted outside of Google’s network. Even inside Google’s network, the contents are not encrypted.[1] So, why do we keep sending sensitive information through email, and why do our banks and mortgage brokers and HR departments keep asking us to send our Social Security numbers, bank account numbers, and other private details through email?

Is it because we are oblivious, naïve, or do we just not care? I suspect it’s a little of all three, but mainly it’s because encryption is hard, and the difficulty barrier keeps us from adopting it.

The alpha launch of Keybase has me excited. It uses the public-key cryptography (a.k.a. PGP/GnuPG) model to let you identify yourself, prove your identity, and allow others to vouch for you. I hope it paves the way to making encryption easier for us all, from the technologically skilled to the technologically challenged.

How Public-key Encryption Works

I want people to send me sensitive information, but I don’t want anyone else to read it while the information is traveling across the Internet. So, I create a pair of keys. One is public; I can send it to others. One is private; I should keep it secret and safe, like the most secret password I’ve ever had.

I give my public key to someone who wants to send me sensitive information, like a Social Security number. They encrypt a file using my public key and send the encrypted file to me. I can decrypt it, since I have the private key that’s paired with the public key used to encrypt the file. I’m the only one in the world who can read the file, and that’s great because I was the intended recipient.

Here’s what’s important: even if someone intercepts the file, they cannot read it because they do not have the private key to decrypt the message. Even if they have my public key, they cannot decrypt it. The information is safe!
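In code, that round trip looks something like the following sketch, which uses PHP’s PECL gnupg extension (my choice for illustration; the fingerprint and passphrase are placeholders):

$gpg = new gnupg();

// The sender encrypts with my PUBLIC key, identified by its fingerprint...
$gpg->addencryptkey('0123456789ABCDEF0123456789ABCDEF01234567'); // placeholder
$ciphertext = $gpg->encrypt('My SSN is 078-05-1120');

// ...and only my PRIVATE key, which never leaves my machine, can reverse it.
$gpg->adddecryptkey('0123456789ABCDEF0123456789ABCDEF01234567', 'my-passphrase');
echo $gpg->decrypt($ciphertext);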

A second benefit of encryption is that I can sign my messages to other people, using my private key. If the recipient has my public key, they can verify the signature. If the signature is bogus, they know I didn’t send the message, but if it checks out, they can be certain I sent the message. No one can forge my signature. Using the signature ensures the message hasn’t been tampered with and the recipient hasn’t been fooled into thinking they’ve received a message from me that is really spam (or worse).
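Signing is the same idea in reverse, again sketched with the PECL gnupg extension and placeholder key details:

$gpg = new gnupg();

// I sign with my private key...
$gpg->addsignkey('0123456789ABCDEF0123456789ABCDEF01234567', 'my-passphrase'); // placeholder
$signed = $gpg->sign('This message really is from me.');

// ...and anyone holding my public key can confirm the signature is genuine
// and that the message was not altered in transit.
$info = $gpg->verify($signed, false, $plaintext);
var_dump($info);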

A third benefit is the web of trust. Others may validate my public key by signing it with their own key. These signatures are then added to public key servers as additional proofs that the keys in question do, in fact, belong to their real owners. This helps others know whether a signed message from me is actually coming from the real me and not just someone claiming to be me with a false key. The web of trust is decentralized, with key servers around the world.

Encryption Is Hard

While encryption provides massive benefits, it is difficult even for seasoned technologists to perform, much less everyone else. This is because the tools we use for encryption often require basic knowledge of how encryption works. Command line tools and mail and browser plugins may be used to encrypt and decrypt messages using your public/private key pair, but these tools are all afterthoughts, things that must be installed and maintained by a user who knows what they are doing.

In order to gain mass adoption of encryption, it needs to be made central to the applications and platforms we use, and we need the ability to use it easily without fully understanding it. It needs to just work.

How Keybase Fits In

I think Keybase is taking steps toward making encryption work for everyone. Keybase is like a key server with much more. I’m excited about what it could become and what it means for the technology community.

With the alpha launch, here are a few of the things Keybase provides:

  • Identity verification with your Twitter and GitHub accounts
  • Tracking of users to vouch for their identities
  • In-browser tools to help you encrypt/decrypt messages to/from other users
  • Command-line tools to help you encrypt/decrypt messages to/from other users and to streamline encryption, making it easier to use than the standard GnuPG tools

Will Keybase result in mass adoption of encryption? No, but it might get technologists and early adopters excited to start using encryption more regularly. The coolness factor could cause encryption to finally catch on in the tech community. Then our community will build the tools necessary to make it easier for our friends, family, and the rest of the world to use encryption.

Here are a few thoughts I gathered from my short time using Keybase.

  • Keybase allows you to upload your private key to the service for use in encrypting/decrypting through the browser. They use a JavaScript library to encrypt your private key on the client side before sending it to their service, but you never know what some other browser plugin or cross-site scripting attack is doing with your data. I advise against this. Use the Keybase command-line tools instead. This will ensure your private key is safely kept on your computer.

  • While the Keybase concept of tracking other users is similar to following on Twitter, it also allows you to sign another user’s key. This is like the web of trust I mentioned, but it doesn’t ask for a level of trust when signing the keys. In my opinion, this is a flaw in Keybase’s design. The web of trust is important to encryption. No one has been driving the web of trust forward, and that’s partly why encryption has been neglected and forgotten. Keybase is in a unique position to drive adoption of the web of trust. I think tracking should remain, but as a form of loose trust. I should be able to say that I fully trust another user’s key as belonging to them (maybe they gave me their public key in person, so I know without a doubt it’s theirs), and that level of trust would be paramount to the system.

  • Keybase is like a key server, but keys uploaded to Keybase are not distributed to the other key servers. If someone on Keybase signs my key, indicating they trust it, this is also not propagated to the other key servers. For the public-key web of trust to work, Keybase needs to play nicely with the already decentralized body of key servers.

  • I’d like to know if Keybase has any plans for physically verifying proof of one’s identity. I’m not sure how this would work in practice, but I could see it as a very useful service, helping to boost the trust level of my key and user account.

I’ve been hoping for a long time that someone would help solve the encryption problem, making it easier for everyone to use. I don’t think Keybase will solve the problem for everyone, but I do think they are raising awareness and could help generate excitement and buzz within the tech community, getting more of us to begin using encryption regularly. When we all start using encryption, then we can drive the rest of the world to use it, making all of our data and ourselves a lot safer.

Be sure to check out my profile on Keybase, and feel free to send me an encrypted message.

Disclaimer: I am not a representative of Keybase. I am just an early user of the service who is excited about what it could become.

  [1] A recent announcement from Google explains that “every single email message you send or receive—100% of them—is encrypted while moving internally.”

Categories: Not PHP-GTK

bdrem is available

CWeiske - Sat, 22/03/2014 - 00:00
Categories: Community

Dates Are Hard

Ben Ramsey - Sat, 22/02/2014 - 18:17

No, I’m not talking about a meeting with a lover or potential lover. While those can be stressful, the calendar math used to determine the precise date and time on which such a meeting might occur is infinitely more difficult to perform. To software programmers, this isn’t news, but I recently encountered an issue when calculating the time for an RFC 4122 UUID that had me questioning the accuracy of our modern, accepted calendars, especially with regard to the days of the week on which our dates fall.

I was working on a simple bug fix for my rhumsaa/uuid PHP library. All tests passed locally, so I assumed the tests would pass in Travis CI after I pushed them to the repository. After all, I hadn’t made any changes to the library; I had just moved a few things in the composer.json file.

But then I received a broken build email from Travis CI. I clicked on the build link to see what had happened: the tests had passed in PHP 5.3 and HHVM, but they failed in PHP 5.4 and 5.5. I was doubly confused, since I was running my local tests against PHP 5.5.4, yet they were failing on Travis CI in 5.5!

The confusion doesn’t stop there. I took a look at the test failures. There were three of them. Each was some variation of this:

2) Rhumsaa\Uuid\UuidTest::testGetDateTime
Failed asserting that two strings are equal.
--- Expected
+++ Actual
@@ @@
-'Sun, 16 Oct 1582 16:34:04 +0000'
+'Sat, 16 Oct 1582 16:34:04 +0000'

It’s expecting Sunday but was getting Saturday for the very same date. How could that be?

At this point, I should explain why I’m checking for this specific date. It’s not an arbitrary choice. Version 1 UUIDs are based on timestamps that are created from 100-nanosecond intervals since 00:00:00 UTC on October 15, 1582. Again, UUID doesn’t arbitrarily use this date. It’s an important date in history. It is the first day of the Gregorian calendar.
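For the curious, here’s a sketch of that timestamp math (my own illustration, not code from the rhumsaa/uuid library; it assumes 64-bit PHP). The offset constant is the number of 100-nanosecond intervals in the 141,427 days between the Gregorian epoch and the Unix epoch:

// The 60-bit timestamp inside the version 1 UUID used later in this post,
// 0901e600-0154-1000-9b21-0800200c9a66 (time_hi . time_mid . time_low):
$uuidTimestamp = 0x1540901e600; // 100-ns intervals since 1582-10-15T00:00:00Z

// 100-ns intervals between the Gregorian epoch and the Unix epoch:
// 141427 days * 86400 seconds/day * 10,000,000 intervals/second.
$gregorianToUnixOffset = 122192928000000000;

$unixSeconds = (int) (($uuidTimestamp - $gregorianToUnixOffset) / 10000000);
echo gmdate('r', $unixSeconds); // 16 Oct 1582 16:34:04 +0000 -- but on which weekday?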

For my unit tests, I chose to test a few of the earliest dates that could possibly be used to create UUIDs. I chose to use static date strings, since I didn’t expect the dates to change. I used PHP to generate the date strings in RFC 2822 format:

php > var_dump(gmdate('r', strtotime('1582-10-16T16:34:04+00:00')));
string(31) "Sun, 16 Oct 1582 16:34:04 +0000"

And my tests included code that looked like this:

$uuid = Uuid::fromString('0901e600-0154-1000-9b21-0800200c9a66');
$this->assertInstanceOf('\DateTime', $uuid->getDateTime());
$this->assertEquals('Sun, 16 Oct 1582 16:34:04 +0000', $uuid->getDateTime()->format('r'));

Using these date strings, I was under the mistaken impression that systems know on what day of the week any particular date is supposed to fall. The system on which I ran this PHP code was convinced that the 16th of October in 1582 was a Sunday, so I trusted this.

The 16th of October in 1582 was not a Sunday, however. It was, in fact, a Saturday. And the 15th of October in 1582 was not a Saturday (as these same systems reported) but, rather, a Friday. When Travis CI reported two of my builds as broken, it was because these systems were accurately reporting the day of the week.

It gets stranger, though. The Unix cal program doesn’t seem to know the correct day of the week for these dates, either:

$ cal 10 1582
    October 1582
Su Mo Tu We Th Fr Sa
    1  2  3  4  5  6
 7  8  9 10 11 12 13
14 15 16 17 18 19 20
21 22 23 24 25 26 27
28 29 30 31

What’s going on here? To make a long story even longer, Gregory’s calendar reforms sought to correct a drift in the date of the vernal equinox. To undo this drift and place the equinox back on March 21st, ten days were removed from the calendar at the time of the Gregorian calendar’s adoption. Thus, October 4, 1582 falls on a Thursday, and the very next day is October 15, 1582, which is a Friday.

The Unix cal program doesn’t show this removal of dates in October 1582, so dates 5-14 are still in place. But if those ten days were never removed, every later date would land on the wrong day of the week, so where does cal account for the reform?

It turns out, the Unix cal program follows Great Britain’s adoption of the Gregorian calendar. Great Britain (and its American colonies at the time) adopted the Gregorian calendar in September 1752, and the cal program shows this:

$ cal 9 1752
   September 1752
Su Mo Tu We Th Fr Sa
       1  2 14 15 16
17 18 19 20 21 22 23
24 25 26 27 28 29 30

Still, while cal uses Great Britain’s adoption date, the Unix date command appears to use Gregory’s adoption date, yet it doesn’t remove the dates 5-14 in October 1582. Therefore, while the 15th falls on a Friday, the 4th falls on a Monday rather than the Thursday it should be. October 14, 1582 shouldn’t exist, but it does:

$ TZ=UTC date -d "1582-10-15T00:00:00.00Z"
Fri Oct 15 00:00:00 UTC 1582
$ TZ=UTC date -d "1582-10-04T00:00:00.00Z"
Mon Oct  4 00:00:00 UTC 1582
$ TZ=UTC date -d "1582-10-14T00:00:00.00Z"
Thu Oct 14 00:00:00 UTC 1582

So, the mystery is solved, and it makes sense why this happens, but it means that it’s tricky to determine the day of the week for dates in the distant past. As for my tests, I dropped the use of the RFC 2822 date format. I didn’t need to test the day of the week. I just needed to test the date. Switching to the ISO 8601 format eliminated the problem for me.
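A sketch of what the fixed assertion might look like (ISO 8601 corresponds to PHP’s 'c' format character):

$uuid = Uuid::fromString('0901e600-0154-1000-9b21-0800200c9a66');
$this->assertInstanceOf('\DateTime', $uuid->getDateTime());
// ISO 8601 carries no day-of-week claim, so the calendar disagreement disappears.
$this->assertEquals('1582-10-16T16:34:04+00:00', $uuid->getDateTime()->format('c'));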

However, this still doesn’t answer why some builds of PHP report October 15, 1582 as occurring on a Friday, while others report it as being a Saturday. Perhaps Derick can help answer that. :-)

Categories: Not PHP-GTK

Year In Review - 2013 Edition

Anant Narayanan's blog - Tue, 31/12/2013 - 09:00

It’s been a very sparse year for the blog; this is only my third post in 2013. It’s certainly not due to a lack of things to write about. Quite the contrary, this has been my most eventful year yet! It just means I need to get more disciplined about writing.

Here’s my eleventh hour attempt to recap the major events in my life in 2013.

First off, I got married! That consumed most of the last quarter of the year, though I was fortunate enough to not have as much work or planning to do as my wife or my parents. We also went on a lovely cruise (my first) right after a big fat Indian wedding, both of which were unforgettable experiences. I can highly recommend cruises to anyone looking for a stress-free vacation. Once you’re on the boat, there’s little to worry about other than the odd feeling of just how relaxed you are (I caught myself thinking “Aren’t I supposed to be doing something?” on several occasions).

Secondly, I completed 1 year at Firebase. It’s been quite a ride. I’m incredibly proud of everything we’ve accomplished in the year I’ve been here, and excited for what’s in store. Over the past year we’ve grown from oh-7-can-fit-in-a-room-easy mode to holy-crap-we-need-a-new-office mode. We’re up to 15 people now, with more in the pipeline, and I suspect we’re going to grow a lot more in 2014 (if anybody knows of a great office space in SoMa, San Francisco, please drop me a line). Working at a startup brings with it an amalgam of emotions, and a range of knowledge that I wouldn’t have been exposed to had I taken a more traditional career path, for which I am grateful. I couldn’t have asked for a better set of people to share my first startup experience with.

I’m also incredibly happy that most of the code I write at Firebase is open source. Contributing to open source projects is how I got my start as a professional programmer, and being able to give back to the community as part of my day job is a rare and fulfilling opportunity.

Another fun thing about working at Firebase: I got to play with the “Google Languages” Go and Dart as part of my job this year. The last time I used Go in any non-trivial capacity was when it first came out - 4 years ago - to write a code generator. The language has since hit the magic “1.0” version and it was a pleasure to revisit (it has a real package manager now, among other things). Dart is a bit younger but also considered mature enough that it is also “1.0” and there’s now a port of Angular to it. Dart was similarly fun to work with; really, what new language is not fun to use, especially in the first week?

The 9fan in me loves Go, but the Mozillian in me wants to root for Rust. Likewise, while I’m excited about Dart (especially since the Angular port), the Mozillian in me wants to always bet on JS. Nevertheless, 2014 is going to be an interesting year on the programming languages front.

Finally, 2013 was also the year where I started understanding more about the Quantified Self movement. I had always been collecting a bunch of data about myself, unwittingly, or at-least for a purpose other than looking through it retrospectively to improve my life. I started using more of these services this year, but haven’t been happy with any particular one so far. They’re all extremely fragmented and don’t let me do as much with the raw data as I’d like. I think it’s still early days and we’re going to see a lot of improvement in the coming years. Can’t wait!

Lots of other great experiences this year: made so many amazing new friends, jet-skiing at Key West, dining at Opaque and The Restaurant at Meadowood, BlizzCon and the Hearthstone beta, and of course my favorite movie of the year: The Desolation of Smaug (my wife nods in agreement).

Happy new year to you all, and here’s to a wonderful 2014!

Categories: Not PHP-GTK

The Hack vs. The MVP

Crisscott - Mon, 10/06/2013 - 18:42

We build lots of features for GSN.com. We do that with a surprisingly small number of folks. Our users play tens of millions of games on a daily basis. Our team consists of one designer, a product manager, four developers, two QA folks, and a couple of Ops people. There’s a lot to manage and improve for those ten folks. That means we often have to make trade-offs in order to get stuff out to production.

During a recent feature discussion session, we were debating whether we should go quick and dirty or build something “the right way.” I reiterated that the team was responsible for the end deliverable and that they have to stand by the decisions they make. I explained that they were empowered and required to do what they think is best for the business. Sometimes quick and dirty is the best way; sometimes it isn’t. One of the developers on the team had this to say, “I am trying to figure out when we do MVP and when we don’t.” That comment was innocent enough, but it really struck me.

The engineering team here at GSN had associated MVP with quick and dirty. Unfortunately, I suspect this isn’t unique to our team. I suspect it comes from a deep-seated misconception that the basic principle of an MVP is about saving time. Additionally, there is a related misconception that hacking something together saves time. Neither of these is really true.

The MVP

The idea behind building a minimum viable product is to ensure that you and your team focus on the right things. It is a big waste of time to build something that people don’t find valuable. Building an MVP is a way to create a product that has just enough features to be able to tell whether users will ultimately find enough value in your product to use it. If you spend six months building the most awesome and complete piece of software the world has ever seen, but users don’t really have a need for it, you have wasted six months. However, if you spend one month to build just enough of a product to get users’ feedback and subsequently determine that your users have no need for an integrated cheese-making schedule, you have saved significant energy and money.

An MVP is about features, not speed to market. You build an MVP because you can never be sure what your users want or need until you get something in their hands. You do not build an MVP because you are pressed for time. Determining what features go into your MVP is not done by looking at a schedule and seeing who is available to do the work. Feature sets are determined by looking critically at the plan and deciding if each individual feature is truly required to gauge the product’s value.

The Hack

Hacking something together is all about saving time (in the short term). To say that you have hacked something together says nothing of the feature set. It does not mean that you have left things out or that you have determined that a specific feature isn’t really necessary after all. A hack is all about short-term turnaround on delivery.

Hacks typically come at the cost of long-term maintainability. In our conversation a few weeks ago, the discussion centered on whether we should add a few if statements to existing pages (the hack) or take the time to break things down into more manageable and interchangeable chunks (the “right” way). We had a short window for delivery, and we knew that the interchangeable-chunks approach, while not an overly complicated task, would take more time. Both solutions, however, would provide the same feature set. Our decision was not “MVP or no MVP.” It was “hack or no hack.”

Conclusion

In the end, we decided to go the “right” way. We had enough time, and we knew that the investment in cleaning things up now would pay off the next time around. More important than what we ended up implementing, however, was coming to a consensus on how we wanted to build software. We want to build small and test things out before we put too much time and effort into a product, but we also want to invest in our future now. Sure, there may be times when we decide that a hack and the “right” way are the same thing. We may do things quick and dirty in order to get stuff out the door, but we will come back to them quickly and clean them up. Our rule can now be summarized as: MVP always; hack only as a last resort.

Categories: Community

Why I love WebSockets

Crisscott - Mon, 25/03/2013 - 23:01

When I was in school, passing notes took some effort. First, you needed to find a piece of paper that was large enough for your message, but small enough that it could be folded into the requisite football shape. Next, you had to write something. Anything smaller than several sentences just wasn’t worth the overhead, so you had to write about half a page’s worth of stuff, or draw a picture large and detailed enough to make it worth it. After that, you set about the process of folding your note into the aforementioned form. Finally, you had to negotiate with your neighbor to get the note from your desk to its final destination. All that was just to send the message. On the receiving side, the note was unfolded and read. Then your counterpart would go about constructing a response, refolding the note, and negotiating the return trip.

(Photo: http://www.flickr.com/photos/kmorely)

Passing notes in class was a task that required effort, skill and time. You sent a message and you waited for a response. If you thought of something new that you really needed to say, you had to wait until the response came back. At that point, you could alter your original message or add new content. While your note was in transit or being read and replied to on the other end, you had no control. You were at the mercy of the medium over which you were forced to communicate. Note passing simply isn’t designed to allow for short, quick, asynchronous communication.

Nowadays, kids just text each other on their smartphones. They send messages quickly and easily without having to invest in all that overhead. After a bit of upfront work to get someone’s phone number, the back channel classroom chatter flows freely. That is, until someone forgets to silence their phone and the teacher confiscates everything with a battery.

Just as the methods of slacking off in school have evolved, so have methods of communicating over the Web. HTTP is the note passing of the Internet. It works well enough for most communications, and when the message is large enough, the overhead is minimal. However, it is less efficient for smaller messages. The headers included by browsers these days can easily outweigh the message body.
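To see why, consider a made-up but typical polling request (the URL and cookie values here are hypothetical). The headers alone run to several hundred bytes, while the question the client actually wants answered would fit in a dozen:

GET /game/rat/status HTTP/1.1
Host: www.example.com
User-Agent: Mozilla/5.0 (Macintosh; Intel Mac OS X 10_8_3) AppleWebKit/537.31 (KHTML, like Gecko) Chrome/26.0.1410.43 Safari/537.31
Accept: application/json, text/plain, */*
Accept-Language: en-US,en;q=0.8
Accept-Encoding: gzip,deflate,sdch
Connection: keep-alive
Cookie: session_id=...; tracking=...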

Also, just like note passing, HTTP is synchronous. The client sends a request and waits until the server responds. If there is something new to be said, a new request is initiated. If the server has something to add, it has to wait until it is asked. It can’t simply send a message when it is ready.

WebSockets are the smartphone to HTTP’s notes. They let you send information quickly and easily. Why go through all that folding when you can simply send a text to say “idk, wut do u think we should do?” Why use 1K of headers when all you want to know is, “Did someone else kill my rat?” Better yet, why ask at all? Why not have the server tell you when the rat has been killed by someone else?
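Here’s a minimal sketch of that server-push model using the Ratchet PHP library (my choice of library for illustration, not necessarily anything we run at GSN). The moment one player’s message announces the rat’s demise, the server pushes it to every other connected client; nobody has to ask:

use Ratchet\ConnectionInterface;
use Ratchet\MessageComponentInterface;

class RatWatcher implements MessageComponentInterface
{
    protected $clients;

    public function __construct()
    {
        $this->clients = new \SplObjectStorage();
    }

    public function onOpen(ConnectionInterface $conn)
    {
        // Remember every connected player.
        $this->clients->attach($conn);
    }

    public function onMessage(ConnectionInterface $from, $msg)
    {
        // One player killed the rat; tell everyone else immediately,
        // with no request/response pair and no polling.
        foreach ($this->clients as $client) {
            if ($client !== $from) {
                $client->send($msg);
            }
        }
    }

    public function onClose(ConnectionInterface $conn)
    {
        $this->clients->detach($conn);
    }

    public function onError(ConnectionInterface $conn, \Exception $e)
    {
        $conn->close();
    }
}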

WebSockets are made for small messages. They are made for asynchronous communications. They are made for the types of applications users expect these days. That’s why I like WebSockets so much. They let me communicate without overhead or rigorous process. I can write an application that is free from request/response pairs. I can write an application that responds as quickly as my users can act. I can write the applications that I like to write.

Categories: Community

D is for Documentation

Crisscott - Sat, 02/03/2013 - 16:07

Code is the way in which humans tell computers what to do. Lots of effort has gone into making code easier for humans to read in the form of high level languages like Java or C++ and scripting languages like PHP, Ruby, and Python. Despite mankind’s best efforts, writing code is still clearly an exercise for talking to computers. It has not evolved to the point where talking to a computer is as easy and natural as talking to other people.

That’s why documentation is so important. Programming languages are just a translation of a developer’s intent into something a computer can execute. The code may show the computer what you intended for it to do, but the context is lost when another developer comes back to it later. Computers don’t know what to do with context. If they did, the days of Skynet would already be upon us. Humans can process context and it makes the process of dissecting and understanding a computer program much easier.

I find it both sad and hilarious when I see a speaker evangelizing code without comments. Invariably, the speaker shows a slide with five lines of code and spends ten minutes explaining its form and function. Even the simplest and most contrived examples from some of the foremost experts in the field require context and explanation.

When a bug decides to show itself at three in the morning, in code that someone else wrote, context and intent are two very powerful tools. When bugs are found, the question “What was this supposed to do?” is more common than “What is this thing doing?” Figuring out what it is doing is easier when you have good log data to go on. Knowing what it was supposed to do is something only the original developer can tell you.

If you aren’t aware of the concept of Test Driven Development, I strongly recommend you dig into it. In summary, tests are written before the code to ensure that the code matches the business requirements. I would like to propose a complementary development driver: Documentation Driven Development. By writing out the code as comments first, you can ensure that the context of the development process will be captured. For example, I start writing code with a docblock like this:

/**
 * Returns the array of AMQP arguments for the given queue.
 *
 * Depending on the configuration available, we may have one or more arguments which
 * need to be sent to RabbitMQ when the queue is declared. These arguments could be
 * things like high availability configurations.
 *
 * If something in getInstance() is failing, check here first. Trying to declare a
 * queue with a set of arguments that does not match the arguments which were used
 * the first time the queue was declared most likely will not work. Check the config
 * for AMQP and make sure that the arguments have not been changed since the queue
 * was originally created. The easiest way to reset them is to kill off the queue
 * and try to recreate it based on the new config.
 *
 * @param string $name The name of the queue which will be used as a key in configs.
 *
 * @return array The array of arguments from the config.
 */

Next I dive into the method body itself:

private static function _getQueueArgs($name)
{
    // Start with nothing.

    // We may need to set some configuration arguments.

    // Check for queue specific args first and then try defaults. We will log where we
    // found the data.

    // Return the args we found.
}

After that, I layer in the actual code:

/**
 * Returns the array of AMQP arguments for the given queue.
 *
 * Depending on the configuration available, we may have one or more arguments which
 * need to be sent to RabbitMQ when the queue is declared. These arguments could be
 * things like high availability configurations.
 *
 * If something in getInstance() is failing, check here first. Trying to declare a
 * queue with a set of arguments that does not match the arguments which were used
 * the first time the queue was declared most likely will not work. Check the config
 * for AMQP and make sure that the arguments have not been changed since the queue
 * was originally created. The easiest way to reset them is to kill off the queue
 * and try to recreate it based on the new config.
 *
 * @param string $name The name of the queue which will be used as a key in configs.
 *
 * @return array The array of arguments from the config.
 */
private static function _getQueueArgs($name)
{
    static::$logger->trace('Entering ' . __FUNCTION__);

    // Start with nothing.
    $args = array();

    // We may need to set some configuration arguments.
    $cfg = Settings\AMQP::getInstance();

    // Check for queue specific args first and then try defaults. We will log where we
    // found the data.
    if (array_key_exists($name, $cfg['queue_arguments'])) {
        $args = $cfg['queue_arguments'][$name];
        static::$logger->info('Queue specific args found for ' . $name);
    } elseif (array_key_exists('default', $cfg['queue_arguments'])) {
        $args = $cfg['queue_arguments']['default'];
        static::$logger->info('Default args used for ' . $name);
    }

    // Return the args we found.
    static::$logger->trace('Exiting ' . __FUNCTION__ . ' on success.');
    return $args;
}

The final result is a small method which is well documented and took little, if any, extra time to write.

Armed with data from logs, unit tests which ensure functionality, configurations to control execution, isolation switches to lock down features, and contextual information in the form of inline documentation, the process of finding bugs becomes easier. LUCID code communicates as if it were a member of the development team. It does all the things you expect from a coworker: it talks, it makes commitments, it works around problems, and it keeps a record of both what it is doing and why it is doing it.

Categories: Community

Merge branch 'master' of git.php.net:/php/gtk-src

PHP-GTK on Github - Thu, 08/11/2012 - 17:55
Categories: Community

Added gtk_window_group_list_windows override - thanks to xektrum

PHP-GTK on Github - Thu, 08/11/2012 - 17:54
m tests/GtkCellLayout/get_cells.phpt Added gtk_window_group_list_windows override - thanks to xektrum
Categories: Community

Fixed the test expected output

PHP-GTK on Github - Thu, 08/11/2012 - 17:44
m tests/GtkCellLayout/get_cells.phpt Fixed the test expected output
Categories: Community

Merge branch 'pull-request/1'

PHP-GTK on Github - Thu, 08/11/2012 - 17:36
m ext/gtk+/gtk-2.12.overrides + tests/GtkCellLayout/get_cells.phpt Merge branch 'pull-request/1'
Categories: Community

Missed yet another gitignore for autotools junk

PHP-GTK on Github - Thu, 08/11/2012 - 17:33
m .gitignore Missed yet another gitignore for autotools junk
Categories: Community

Added brackets around if on gtk_cell_layout_get_cells override in ext/gtk+/gtk-2.12.overrides

PHP-GTK on Github - Wed, 07/11/2012 - 18:53
m ext/gtk+/gtk-2.12.overrides + tests/GtkCellLayout/get_cells.phpt Added brackets around if on gtk_cell_layout_get_cells override in ext/gtk+/gtk-2.12.overrides Added test/GtkCellLayout/get_cells.phpt
Categories: Community

Added gtk_cell_layout_get_cells override

PHP-GTK on Github - Fri, 02/11/2012 - 22:06
m ext/gtk+/gtk-2.12.overrides Added gtk_cell_layout_get_cells override
Categories: Community

Add little hack to make sure user classes get

PHP-GTK on Github - Wed, 15/08/2012 - 16:39
m demos/examples/signals.php m generator/templates.php m main/phpg_gobject.c Add little hack to make sure user classes get their constructors run when used with libglade or gtkbuilder, and that constructors are never double constructed. Also added signal example for using TYPE_PHP_VALUE
Categories: Community

Segfault bug on win32 and windows build system

PHP-GTK on Github - Wed, 15/08/2012 - 02:40
m main/phpg_gvalue.c m win32/config.w32.in m win32/confutils.js Segfault bug on win32 and windows build system: issue with the zend destructor using type and getting confused when there was no type - only showed up on release builds on win32 (grrr, stupid thing) - also fixed the compiler detection and flags. This probably breaks things for compilers older than 2008; really don't care
Categories: Community

This officially breaks building with anything less than 5.3

PHP-GTK on Github - Thu, 09/08/2012 - 04:07
m generator/array_printf.php This officially breaks building with anything less than 5.3 with the generator, since it uses closures. But I can't get rid of the /e modifier in the regex without using a closure to wrap up that array information, so I'm stuck - either I break the generator for 5.2 or it spews on 5.4. Note this doesn't mean you can't still build against 5.2; you just need php 5.3 cli or higher to do the build
Categories: Community

rest of mac fix with nswindow id or null if another backend is used

PHP-GTK on Github - Wed, 08/08/2012 - 02:03
m ext/gtk+/gdk.overrides rest of mac fix with nswindow id or null if another backend is used
Categories: Community

Fix for quartz backend - should return nothing for gdkdrawable xid since the underlying item is an nswindow or nsview but only for gdkwindows

PHP-GTK on Github - Wed, 08/08/2012 - 01:43
m ext/gtk+/gdk.overrides Fix for quartz backend - should return nothing for gdkdrawable xid since the underlying item is an nswindow or nsview but only for gdkwindows
Categories: Community