Sketch2Code: From Sketch Design On Paper To Website Interface

  • Unique Paper ID: 149669
  • Volume: 7
  • Issue: 1
  • PageNo: 294-297
  • Abstract:
  • Generally, when a person wants to build a website, they must hire a developer, spending considerable money and time; even then, the delivered interface may not match what they envisioned. Currently, the developer is the bridge between the user's sketches and the program code. To overcome these problems and restrictions, the Microsoft Artificial Intelligence team introduced a platform named "Sketch2Code". Transforming a handwritten sketch of a graphical user interface, created by a designer, into computer code is a typical task a developer performs to build customized websites. In this paper, we show that machine learning methods can play a major role in training a model end-to-end to generate code from a user-supplied image, with support across platforms through the browser (i.e., iOS, Android, and Windows). We present two approaches that automate this process: one using classical computer vision techniques, and another using a novel application of deep semantic segmentation networks. We use a dataset of websites to train and evaluate both approaches.
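The classical computer-vision approach mentioned in the abstract ultimately has to map detected sketch elements to markup. A minimal, hypothetical Python sketch of that final step, assuming an upstream detector has already produced bounding boxes as `(class, x, y, w, h)` tuples (the class names and tag mapping here are illustrative, not taken from the paper):

```python
# Hypothetical final step of a classical-CV sketch-to-code pipeline:
# turn detected UI-element bounding boxes into HTML, ordered by layout.
# Detection itself (contours / segmentation) is assumed to have run already.

TAGS = {
    "button": "<button>Button</button>",
    "textbox": '<input type="text">',
    "image": '<img src="placeholder.png" alt="image">',
    "label": "<label>Label</label>",
}

def boxes_to_html(boxes):
    """Emit HTML for detected elements, top-to-bottom then left-to-right."""
    ordered = sorted(boxes, key=lambda b: (b[2], b[1]))  # sort by y, then x
    body = "\n".join(f"  {TAGS[cls]}" for cls, x, y, w, h in ordered)
    return f"<body>\n{body}\n</body>"

# Example: a label above a textbox, with a button to the textbox's right.
print(boxes_to_html([
    ("button", 120, 80, 60, 20),
    ("label", 10, 10, 80, 20),
    ("textbox", 10, 80, 100, 20),
]))
```

The deep-learning variant would replace the hand-built detector with a semantic segmentation network, but the region-to-markup mapping stage remains conceptually the same.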

Copyright & License

Copyright © 2025. The authors retain the copyright of this article. This is an open-access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.

BibTeX

@article{149669,
        author = {Akash Bharat Wadje and Prof. Rohit Bagh},
        title = {Sketch2Code: From Sketch Design On Paper To Website Interface},
        journal = {International Journal of Innovative Research in Technology},
        year = {},
        volume = {7},
        number = {1},
        pages = {294-297},
        issn = {2349-6002},
        url = {https://ijirt.org/article?manuscript=149669},
        abstract = {Generally, when a person wants to build a website, they must hire a developer, spending considerable money and time; even then, the delivered interface may not match what they envisioned. Currently, the developer is the bridge between the user's sketches and the program code. To overcome these problems and restrictions, the Microsoft Artificial Intelligence team introduced a platform named "Sketch2Code". Transforming a handwritten sketch of a graphical user interface, created by a designer, into computer code is a typical task a developer performs to build customized websites. In this paper, we show that machine learning methods can play a major role in training a model end-to-end to generate code from a user-supplied image, with support across platforms through the browser (i.e., iOS, Android, and Windows). We present two approaches that automate this process: one using classical computer vision techniques, and another using a novel application of deep semantic segmentation networks. We use a dataset of websites to train and evaluate both approaches.},
        keywords = {HTML, Artificial Intelligence, SKETCH2CODE, Graphical User Interface},
        month = {},
        }
